Test Report: KVM_Linux_crio 17240

ca8bf15b503bfa796ca02bce755f3a2820b75eb7:2023-09-19:31081

Failed tests (27/287)

Order   Failed test   Duration (s)
25 TestAddons/parallel/Ingress 155.78
36 TestAddons/StoppedEnableDisable 155.68
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.18
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 169.75
200 TestMultiNode/serial/PingHostFrom2Pods 3.13
206 TestMultiNode/serial/RestartKeepsNodes 688.24
208 TestMultiNode/serial/StopMultiNode 142.74
215 TestPreload 263.79
221 TestRunningBinaryUpgrade 163.5
226 TestStoppedBinaryUpgrade/Upgrade 321.73
241 TestPause/serial/SecondStartNoReconfiguration 101.65
260 TestStartStop/group/old-k8s-version/serial/FirstStart 584.44
267 TestStartStop/group/embed-certs/serial/Stop 140.2
272 TestStartStop/group/no-preload/serial/Stop 140.06
275 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.58
276 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
278 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
280 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
284 TestStartStop/group/old-k8s-version/serial/Stop 139.41
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
287 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.51
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.51
289 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.44
290 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 522.63
291 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 327.99
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 349.91
293 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 343.48
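Each failure listed above can be re-run in isolation. The sketch below shows how one failed test from this run might be reproduced locally; it assumes a checkout of the minikube repository. The -run and -timeout flags are standard go test flags and the kvm2/crio start arguments appear in the Audit log further down, but the -minikube-start-args flag name (and the absence of any extra build tags) is an assumption about the integration harness, not something confirmed by this report.

    # Hypothetical local reproduction of a single failed test from this run.
    # -run/-timeout are standard 'go test' flags; -minikube-start-args is assumed
    # to be the harness flag for passing driver/runtime options.
    go test -v ./test/integration \
      -run "TestAddons/parallel/Ingress" \
      -timeout 60m \
      -minikube-start-args="--driver=kvm2 --container-runtime=crio"

The same pattern applies to any other entry in the table; substitute the test name from the second column into -run.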
TestAddons/parallel/Ingress (155.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-897988 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-897988 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-897988 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [88ac2608-6ed2-49a6-97a1-9efaf2f4b32d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [88ac2608-6ed2-49a6-97a1-9efaf2f4b32d] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.026887209s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-897988 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.149088844s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-897988 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.206
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-897988 addons disable ingress-dns --alsologtostderr -v=1: (1.892501452s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-897988 addons disable ingress --alsologtostderr -v=1: (7.801297071s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-897988 -n addons-897988
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-897988 logs -n 25: (1.336041108s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-698254 | jenkins | v1.31.2 | 19 Sep 23 16:34 UTC |                     |
	|         | -p download-only-698254        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-698254 | jenkins | v1.31.2 | 19 Sep 23 16:35 UTC |                     |
	|         | -p download-only-698254        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 19 Sep 23 16:35 UTC | 19 Sep 23 16:35 UTC |
	| delete  | -p download-only-698254        | download-only-698254 | jenkins | v1.31.2 | 19 Sep 23 16:35 UTC | 19 Sep 23 16:35 UTC |
	| delete  | -p download-only-698254        | download-only-698254 | jenkins | v1.31.2 | 19 Sep 23 16:35 UTC | 19 Sep 23 16:35 UTC |
	| start   | --download-only -p             | binary-mirror-912336 | jenkins | v1.31.2 | 19 Sep 23 16:35 UTC |                     |
	|         | binary-mirror-912336           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:32843         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-912336        | binary-mirror-912336 | jenkins | v1.31.2 | 19 Sep 23 16:35 UTC | 19 Sep 23 16:35 UTC |
	| start   | -p addons-897988               | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:35 UTC | 19 Sep 23 16:38 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:38 UTC | 19 Sep 23 16:38 UTC |
	|         | addons-897988                  |                      |         |         |                     |                     |
	| addons  | addons-897988 addons           | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:38 UTC | 19 Sep 23 16:38 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:38 UTC | 19 Sep 23 16:38 UTC |
	|         | -p addons-897988               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:38 UTC | 19 Sep 23 16:38 UTC |
	|         | addons-897988                  |                      |         |         |                     |                     |
	| ip      | addons-897988 ip               | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:38 UTC | 19 Sep 23 16:38 UTC |
	| addons  | addons-897988 addons disable   | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:38 UTC | 19 Sep 23 16:38 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-897988 addons disable   | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:38 UTC | 19 Sep 23 16:38 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| ssh     | addons-897988 ssh curl -s      | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                      |         |         |                     |                     |
	|         | nginx.example.com'             |                      |         |         |                     |                     |
	| addons  | addons-897988 addons           | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:39 UTC | 19 Sep 23 16:39 UTC |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-897988 addons           | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:39 UTC | 19 Sep 23 16:39 UTC |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-897988 ip               | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:41 UTC | 19 Sep 23 16:41 UTC |
	| addons  | addons-897988 addons disable   | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:41 UTC | 19 Sep 23 16:41 UTC |
	|         | ingress-dns --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-897988 addons disable   | addons-897988        | jenkins | v1.31.2 | 19 Sep 23 16:41 UTC | 19 Sep 23 16:41 UTC |
	|         | ingress --alsologtostderr -v=1 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 16:35:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 16:35:49.005372   13741 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:35:49.005613   13741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:35:49.005623   13741 out.go:309] Setting ErrFile to fd 2...
	I0919 16:35:49.005628   13741 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:35:49.005798   13741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 16:35:49.006335   13741 out.go:303] Setting JSON to false
	I0919 16:35:49.007128   13741 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1099,"bootTime":1695140250,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:35:49.007188   13741 start.go:138] virtualization: kvm guest
	I0919 16:35:49.009754   13741 out.go:177] * [addons-897988] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:35:49.011435   13741 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 16:35:49.011383   13741 notify.go:220] Checking for updates...
	I0919 16:35:49.013098   13741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:35:49.014609   13741 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:35:49.016096   13741 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:35:49.017682   13741 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 16:35:49.019122   13741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 16:35:49.020736   13741 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:35:49.051363   13741 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 16:35:49.052905   13741 start.go:298] selected driver: kvm2
	I0919 16:35:49.052918   13741 start.go:902] validating driver "kvm2" against <nil>
	I0919 16:35:49.052932   13741 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 16:35:49.053846   13741 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:35:49.053936   13741 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 16:35:49.067739   13741 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 16:35:49.067782   13741 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 16:35:49.067959   13741 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 16:35:49.067988   13741 cni.go:84] Creating CNI manager for ""
	I0919 16:35:49.067995   13741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 16:35:49.068004   13741 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 16:35:49.068011   13741 start_flags.go:321] config:
	{Name:addons-897988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-897988 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:35:49.068109   13741 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:35:49.069993   13741 out.go:177] * Starting control plane node addons-897988 in cluster addons-897988
	I0919 16:35:49.071345   13741 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 16:35:49.071370   13741 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 16:35:49.071376   13741 cache.go:57] Caching tarball of preloaded images
	I0919 16:35:49.071452   13741 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 16:35:49.071466   13741 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 16:35:49.071743   13741 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/config.json ...
	I0919 16:35:49.071762   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/config.json: {Name:mkc66bbc44f9ac26decab994963f01d0d5cd3647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:35:49.071897   13741 start.go:365] acquiring machines lock for addons-897988: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 16:35:49.071951   13741 start.go:369] acquired machines lock for "addons-897988" in 37.864µs
	I0919 16:35:49.071974   13741 start.go:93] Provisioning new machine with config: &{Name:addons-897988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-897988 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 16:35:49.072030   13741 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 16:35:49.073812   13741 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0919 16:35:49.073910   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:35:49.073954   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:35:49.087169   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0919 16:35:49.087613   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:35:49.088115   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:35:49.088134   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:35:49.088523   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:35:49.088693   13741 main.go:141] libmachine: (addons-897988) Calling .GetMachineName
	I0919 16:35:49.088848   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:35:49.088989   13741 start.go:159] libmachine.API.Create for "addons-897988" (driver="kvm2")
	I0919 16:35:49.089019   13741 client.go:168] LocalClient.Create starting
	I0919 16:35:49.089051   13741 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem
	I0919 16:35:49.203055   13741 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem
	I0919 16:35:49.372868   13741 main.go:141] libmachine: Running pre-create checks...
	I0919 16:35:49.372890   13741 main.go:141] libmachine: (addons-897988) Calling .PreCreateCheck
	I0919 16:35:49.373393   13741 main.go:141] libmachine: (addons-897988) Calling .GetConfigRaw
	I0919 16:35:49.373830   13741 main.go:141] libmachine: Creating machine...
	I0919 16:35:49.373844   13741 main.go:141] libmachine: (addons-897988) Calling .Create
	I0919 16:35:49.373976   13741 main.go:141] libmachine: (addons-897988) Creating KVM machine...
	I0919 16:35:49.375223   13741 main.go:141] libmachine: (addons-897988) DBG | found existing default KVM network
	I0919 16:35:49.375951   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:49.375812   13763 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a30}
	I0919 16:35:49.381466   13741 main.go:141] libmachine: (addons-897988) DBG | trying to create private KVM network mk-addons-897988 192.168.39.0/24...
	I0919 16:35:49.447339   13741 main.go:141] libmachine: (addons-897988) DBG | private KVM network mk-addons-897988 192.168.39.0/24 created
	I0919 16:35:49.447370   13741 main.go:141] libmachine: (addons-897988) Setting up store path in /home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988 ...
	I0919 16:35:49.447385   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:49.447298   13763 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:35:49.447400   13741 main.go:141] libmachine: (addons-897988) Building disk image from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 16:35:49.447434   13741 main.go:141] libmachine: (addons-897988) Downloading /home/jenkins/minikube-integration/17240-6042/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 16:35:49.670511   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:49.670395   13763 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa...
	I0919 16:35:50.070143   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:50.070010   13763 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/addons-897988.rawdisk...
	I0919 16:35:50.070177   13741 main.go:141] libmachine: (addons-897988) DBG | Writing magic tar header
	I0919 16:35:50.070194   13741 main.go:141] libmachine: (addons-897988) DBG | Writing SSH key tar header
	I0919 16:35:50.070209   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:50.070110   13763 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988 ...
	I0919 16:35:50.070225   13741 main.go:141] libmachine: (addons-897988) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988 (perms=drwx------)
	I0919 16:35:50.070241   13741 main.go:141] libmachine: (addons-897988) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988
	I0919 16:35:50.070255   13741 main.go:141] libmachine: (addons-897988) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines
	I0919 16:35:50.070264   13741 main.go:141] libmachine: (addons-897988) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:35:50.070279   13741 main.go:141] libmachine: (addons-897988) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042
	I0919 16:35:50.070294   13741 main.go:141] libmachine: (addons-897988) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 16:35:50.070313   13741 main.go:141] libmachine: (addons-897988) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines (perms=drwxr-xr-x)
	I0919 16:35:50.070331   13741 main.go:141] libmachine: (addons-897988) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube (perms=drwxr-xr-x)
	I0919 16:35:50.070341   13741 main.go:141] libmachine: (addons-897988) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042 (perms=drwxrwxr-x)
	I0919 16:35:50.070353   13741 main.go:141] libmachine: (addons-897988) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 16:35:50.070368   13741 main.go:141] libmachine: (addons-897988) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 16:35:50.070379   13741 main.go:141] libmachine: (addons-897988) DBG | Checking permissions on dir: /home/jenkins
	I0919 16:35:50.070392   13741 main.go:141] libmachine: (addons-897988) DBG | Checking permissions on dir: /home
	I0919 16:35:50.070402   13741 main.go:141] libmachine: (addons-897988) DBG | Skipping /home - not owner
	I0919 16:35:50.070417   13741 main.go:141] libmachine: (addons-897988) Creating domain...
	I0919 16:35:50.071516   13741 main.go:141] libmachine: (addons-897988) define libvirt domain using xml: 
	I0919 16:35:50.071543   13741 main.go:141] libmachine: (addons-897988) <domain type='kvm'>
	I0919 16:35:50.071579   13741 main.go:141] libmachine: (addons-897988)   <name>addons-897988</name>
	I0919 16:35:50.071606   13741 main.go:141] libmachine: (addons-897988)   <memory unit='MiB'>4000</memory>
	I0919 16:35:50.071624   13741 main.go:141] libmachine: (addons-897988)   <vcpu>2</vcpu>
	I0919 16:35:50.071639   13741 main.go:141] libmachine: (addons-897988)   <features>
	I0919 16:35:50.071660   13741 main.go:141] libmachine: (addons-897988)     <acpi/>
	I0919 16:35:50.071679   13741 main.go:141] libmachine: (addons-897988)     <apic/>
	I0919 16:35:50.071690   13741 main.go:141] libmachine: (addons-897988)     <pae/>
	I0919 16:35:50.071699   13741 main.go:141] libmachine: (addons-897988)     
	I0919 16:35:50.071709   13741 main.go:141] libmachine: (addons-897988)   </features>
	I0919 16:35:50.071716   13741 main.go:141] libmachine: (addons-897988)   <cpu mode='host-passthrough'>
	I0919 16:35:50.071721   13741 main.go:141] libmachine: (addons-897988)   
	I0919 16:35:50.071729   13741 main.go:141] libmachine: (addons-897988)   </cpu>
	I0919 16:35:50.071735   13741 main.go:141] libmachine: (addons-897988)   <os>
	I0919 16:35:50.071744   13741 main.go:141] libmachine: (addons-897988)     <type>hvm</type>
	I0919 16:35:50.071753   13741 main.go:141] libmachine: (addons-897988)     <boot dev='cdrom'/>
	I0919 16:35:50.071758   13741 main.go:141] libmachine: (addons-897988)     <boot dev='hd'/>
	I0919 16:35:50.071767   13741 main.go:141] libmachine: (addons-897988)     <bootmenu enable='no'/>
	I0919 16:35:50.071773   13741 main.go:141] libmachine: (addons-897988)   </os>
	I0919 16:35:50.071781   13741 main.go:141] libmachine: (addons-897988)   <devices>
	I0919 16:35:50.071788   13741 main.go:141] libmachine: (addons-897988)     <disk type='file' device='cdrom'>
	I0919 16:35:50.071800   13741 main.go:141] libmachine: (addons-897988)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/boot2docker.iso'/>
	I0919 16:35:50.071811   13741 main.go:141] libmachine: (addons-897988)       <target dev='hdc' bus='scsi'/>
	I0919 16:35:50.071819   13741 main.go:141] libmachine: (addons-897988)       <readonly/>
	I0919 16:35:50.071827   13741 main.go:141] libmachine: (addons-897988)     </disk>
	I0919 16:35:50.071834   13741 main.go:141] libmachine: (addons-897988)     <disk type='file' device='disk'>
	I0919 16:35:50.071842   13741 main.go:141] libmachine: (addons-897988)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 16:35:50.071851   13741 main.go:141] libmachine: (addons-897988)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/addons-897988.rawdisk'/>
	I0919 16:35:50.071859   13741 main.go:141] libmachine: (addons-897988)       <target dev='hda' bus='virtio'/>
	I0919 16:35:50.071865   13741 main.go:141] libmachine: (addons-897988)     </disk>
	I0919 16:35:50.071876   13741 main.go:141] libmachine: (addons-897988)     <interface type='network'>
	I0919 16:35:50.071890   13741 main.go:141] libmachine: (addons-897988)       <source network='mk-addons-897988'/>
	I0919 16:35:50.071903   13741 main.go:141] libmachine: (addons-897988)       <model type='virtio'/>
	I0919 16:35:50.071909   13741 main.go:141] libmachine: (addons-897988)     </interface>
	I0919 16:35:50.071915   13741 main.go:141] libmachine: (addons-897988)     <interface type='network'>
	I0919 16:35:50.071930   13741 main.go:141] libmachine: (addons-897988)       <source network='default'/>
	I0919 16:35:50.071939   13741 main.go:141] libmachine: (addons-897988)       <model type='virtio'/>
	I0919 16:35:50.071946   13741 main.go:141] libmachine: (addons-897988)     </interface>
	I0919 16:35:50.071960   13741 main.go:141] libmachine: (addons-897988)     <serial type='pty'>
	I0919 16:35:50.071976   13741 main.go:141] libmachine: (addons-897988)       <target port='0'/>
	I0919 16:35:50.071990   13741 main.go:141] libmachine: (addons-897988)     </serial>
	I0919 16:35:50.071998   13741 main.go:141] libmachine: (addons-897988)     <console type='pty'>
	I0919 16:35:50.072005   13741 main.go:141] libmachine: (addons-897988)       <target type='serial' port='0'/>
	I0919 16:35:50.072013   13741 main.go:141] libmachine: (addons-897988)     </console>
	I0919 16:35:50.072019   13741 main.go:141] libmachine: (addons-897988)     <rng model='virtio'>
	I0919 16:35:50.072028   13741 main.go:141] libmachine: (addons-897988)       <backend model='random'>/dev/random</backend>
	I0919 16:35:50.072040   13741 main.go:141] libmachine: (addons-897988)     </rng>
	I0919 16:35:50.072057   13741 main.go:141] libmachine: (addons-897988)     
	I0919 16:35:50.072068   13741 main.go:141] libmachine: (addons-897988)     
	I0919 16:35:50.072079   13741 main.go:141] libmachine: (addons-897988)   </devices>
	I0919 16:35:50.072089   13741 main.go:141] libmachine: (addons-897988) </domain>
	I0919 16:35:50.072094   13741 main.go:141] libmachine: (addons-897988) 
	I0919 16:35:50.077845   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:60:b3:88 in network default
	I0919 16:35:50.078360   13741 main.go:141] libmachine: (addons-897988) Ensuring networks are active...
	I0919 16:35:50.078373   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:50.079062   13741 main.go:141] libmachine: (addons-897988) Ensuring network default is active
	I0919 16:35:50.079346   13741 main.go:141] libmachine: (addons-897988) Ensuring network mk-addons-897988 is active
	I0919 16:35:50.079816   13741 main.go:141] libmachine: (addons-897988) Getting domain xml...
	I0919 16:35:50.080441   13741 main.go:141] libmachine: (addons-897988) Creating domain...
	I0919 16:35:51.317548   13741 main.go:141] libmachine: (addons-897988) Waiting to get IP...
	I0919 16:35:51.318348   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:51.318725   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:51.318786   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:51.318723   13763 retry.go:31] will retry after 303.382974ms: waiting for machine to come up
	I0919 16:35:51.623360   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:51.623804   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:51.623830   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:51.623766   13763 retry.go:31] will retry after 263.37231ms: waiting for machine to come up
	I0919 16:35:51.889111   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:51.889463   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:51.889488   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:51.889422   13763 retry.go:31] will retry after 310.719772ms: waiting for machine to come up
	I0919 16:35:52.201865   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:52.202320   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:52.202346   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:52.202291   13763 retry.go:31] will retry after 590.871388ms: waiting for machine to come up
	I0919 16:35:52.794538   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:52.794985   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:52.795019   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:52.794978   13763 retry.go:31] will retry after 512.233676ms: waiting for machine to come up
	I0919 16:35:53.308697   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:53.309214   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:53.309249   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:53.309186   13763 retry.go:31] will retry after 733.362994ms: waiting for machine to come up
	I0919 16:35:54.043968   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:54.044344   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:54.044373   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:54.044281   13763 retry.go:31] will retry after 928.795057ms: waiting for machine to come up
	I0919 16:35:54.974665   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:54.975136   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:54.975162   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:54.975100   13763 retry.go:31] will retry after 1.027581921s: waiting for machine to come up
	I0919 16:35:56.004256   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:56.004706   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:56.004737   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:56.004647   13763 retry.go:31] will retry after 1.296208774s: waiting for machine to come up
	I0919 16:35:57.301859   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:57.302360   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:57.302395   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:57.302302   13763 retry.go:31] will retry after 2.31349818s: waiting for machine to come up
	I0919 16:35:59.617599   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:35:59.618028   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:35:59.618061   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:35:59.617935   13763 retry.go:31] will retry after 2.2139191s: waiting for machine to come up
	I0919 16:36:01.834226   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:01.834719   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:36:01.834755   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:36:01.834682   13763 retry.go:31] will retry after 2.791581872s: waiting for machine to come up
	I0919 16:36:04.628120   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:04.628519   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:36:04.628542   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:36:04.628460   13763 retry.go:31] will retry after 2.992995092s: waiting for machine to come up
	I0919 16:36:07.624481   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:07.624830   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find current IP address of domain addons-897988 in network mk-addons-897988
	I0919 16:36:07.624863   13741 main.go:141] libmachine: (addons-897988) DBG | I0919 16:36:07.624782   13763 retry.go:31] will retry after 4.516059007s: waiting for machine to come up
	I0919 16:36:12.143949   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.144441   13741 main.go:141] libmachine: (addons-897988) Found IP for machine: 192.168.39.206
	I0919 16:36:12.144470   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has current primary IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.144479   13741 main.go:141] libmachine: (addons-897988) Reserving static IP address...
	I0919 16:36:12.144899   13741 main.go:141] libmachine: (addons-897988) DBG | unable to find host DHCP lease matching {name: "addons-897988", mac: "52:54:00:01:2d:ba", ip: "192.168.39.206"} in network mk-addons-897988
	I0919 16:36:12.216788   13741 main.go:141] libmachine: (addons-897988) DBG | Getting to WaitForSSH function...
	I0919 16:36:12.216826   13741 main.go:141] libmachine: (addons-897988) Reserved static IP address: 192.168.39.206
	I0919 16:36:12.216842   13741 main.go:141] libmachine: (addons-897988) Waiting for SSH to be available...
	I0919 16:36:12.218732   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.219143   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:12.219173   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.219353   13741 main.go:141] libmachine: (addons-897988) DBG | Using SSH client type: external
	I0919 16:36:12.219390   13741 main.go:141] libmachine: (addons-897988) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa (-rw-------)
	I0919 16:36:12.219413   13741 main.go:141] libmachine: (addons-897988) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 16:36:12.219429   13741 main.go:141] libmachine: (addons-897988) DBG | About to run SSH command:
	I0919 16:36:12.219442   13741 main.go:141] libmachine: (addons-897988) DBG | exit 0
	I0919 16:36:12.316217   13741 main.go:141] libmachine: (addons-897988) DBG | SSH cmd err, output: <nil>: 
	I0919 16:36:12.316482   13741 main.go:141] libmachine: (addons-897988) KVM machine creation complete!
	I0919 16:36:12.316777   13741 main.go:141] libmachine: (addons-897988) Calling .GetConfigRaw
	I0919 16:36:12.317317   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:12.317525   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:12.317682   13741 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 16:36:12.317700   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:12.318784   13741 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 16:36:12.318797   13741 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 16:36:12.318803   13741 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 16:36:12.318810   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:12.320935   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.321338   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:12.321370   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.321455   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:12.321641   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:12.321786   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:12.321949   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:12.322111   13741 main.go:141] libmachine: Using SSH client type: native
	I0919 16:36:12.322480   13741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0919 16:36:12.322493   13741 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 16:36:12.435740   13741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:36:12.435763   13741 main.go:141] libmachine: Detecting the provisioner...
	I0919 16:36:12.435774   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:12.438190   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.438507   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:12.438533   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.438687   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:12.438855   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:12.439002   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:12.439094   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:12.439249   13741 main.go:141] libmachine: Using SSH client type: native
	I0919 16:36:12.439559   13741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0919 16:36:12.439571   13741 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 16:36:12.553124   13741 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0919 16:36:12.553242   13741 main.go:141] libmachine: found compatible host: buildroot
	I0919 16:36:12.553258   13741 main.go:141] libmachine: Provisioning with buildroot...
	I0919 16:36:12.553266   13741 main.go:141] libmachine: (addons-897988) Calling .GetMachineName
	I0919 16:36:12.553534   13741 buildroot.go:166] provisioning hostname "addons-897988"
	I0919 16:36:12.553555   13741 main.go:141] libmachine: (addons-897988) Calling .GetMachineName
	I0919 16:36:12.553749   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:12.556234   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.556555   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:12.556580   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.556783   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:12.557009   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:12.557168   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:12.557307   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:12.557454   13741 main.go:141] libmachine: Using SSH client type: native
	I0919 16:36:12.557985   13741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0919 16:36:12.558012   13741 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-897988 && echo "addons-897988" | sudo tee /etc/hostname
	I0919 16:36:12.680557   13741 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-897988
	
	I0919 16:36:12.680589   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:12.683188   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.683500   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:12.683540   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.683678   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:12.683856   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:12.684016   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:12.684142   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:12.684276   13741 main.go:141] libmachine: Using SSH client type: native
	I0919 16:36:12.684607   13741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0919 16:36:12.684625   13741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-897988' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-897988/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-897988' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 16:36:12.805351   13741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:36:12.805375   13741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 16:36:12.805410   13741 buildroot.go:174] setting up certificates
	I0919 16:36:12.805419   13741 provision.go:83] configureAuth start
	I0919 16:36:12.805432   13741 main.go:141] libmachine: (addons-897988) Calling .GetMachineName
	I0919 16:36:12.805719   13741 main.go:141] libmachine: (addons-897988) Calling .GetIP
	I0919 16:36:12.808338   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.808659   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:12.808693   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.808774   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:12.810805   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.811149   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:12.811183   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:12.811273   13741 provision.go:138] copyHostCerts
	I0919 16:36:12.811345   13741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 16:36:12.811499   13741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 16:36:12.811592   13741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 16:36:12.811671   13741 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.addons-897988 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube addons-897988]
	I0919 16:36:13.123373   13741 provision.go:172] copyRemoteCerts
	I0919 16:36:13.123430   13741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 16:36:13.123451   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:13.125996   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.126261   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:13.126285   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.126458   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:13.126657   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:13.126784   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:13.126910   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:13.214043   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 16:36:13.237526   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0919 16:36:13.260624   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 16:36:13.283477   13741 provision.go:86] duration metric: configureAuth took 478.043215ms
	I0919 16:36:13.283503   13741 buildroot.go:189] setting minikube options for container-runtime
	I0919 16:36:13.283687   13741 config.go:182] Loaded profile config "addons-897988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 16:36:13.283768   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:13.286519   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.286859   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:13.286888   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.287085   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:13.287273   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:13.287496   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:13.287662   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:13.287916   13741 main.go:141] libmachine: Using SSH client type: native
	I0919 16:36:13.288313   13741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0919 16:36:13.288332   13741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 16:36:13.589236   13741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
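	The step above writes an --insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal way to double-check the drop-in on the node (a sketch, assuming the profile name addons-897988 from this log):

	  # inspect the sysconfig drop-in CRI-O was restarted with
	  minikube -p addons-897988 ssh -- cat /etc/sysconfig/crio.minikube
	  # confirm the runtime came back up after the restart
	  minikube -p addons-897988 ssh -- systemctl is-active crio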
	I0919 16:36:13.589267   13741 main.go:141] libmachine: Checking connection to Docker...
	I0919 16:36:13.589309   13741 main.go:141] libmachine: (addons-897988) Calling .GetURL
	I0919 16:36:13.590509   13741 main.go:141] libmachine: (addons-897988) DBG | Using libvirt version 6000000
	I0919 16:36:13.592478   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.592812   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:13.592845   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.592929   13741 main.go:141] libmachine: Docker is up and running!
	I0919 16:36:13.592960   13741 main.go:141] libmachine: Reticulating splines...
	I0919 16:36:13.592968   13741 client.go:171] LocalClient.Create took 24.503939403s
	I0919 16:36:13.592991   13741 start.go:167] duration metric: libmachine.API.Create for "addons-897988" took 24.504002s
	I0919 16:36:13.593001   13741 start.go:300] post-start starting for "addons-897988" (driver="kvm2")
	I0919 16:36:13.593009   13741 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 16:36:13.593025   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:13.593270   13741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 16:36:13.593291   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:13.595220   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.595531   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:13.595565   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.595745   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:13.595929   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:13.596068   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:13.596212   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:13.682693   13741 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 16:36:13.687330   13741 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 16:36:13.687355   13741 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 16:36:13.687439   13741 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 16:36:13.687470   13741 start.go:303] post-start completed in 94.462617ms
	I0919 16:36:13.687508   13741 main.go:141] libmachine: (addons-897988) Calling .GetConfigRaw
	I0919 16:36:13.688009   13741 main.go:141] libmachine: (addons-897988) Calling .GetIP
	I0919 16:36:13.690400   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.690703   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:13.690732   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.690967   13741 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/config.json ...
	I0919 16:36:13.691127   13741 start.go:128] duration metric: createHost completed in 24.619089413s
	I0919 16:36:13.691147   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:13.693130   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.693477   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:13.693499   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.693618   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:13.693796   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:13.693926   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:13.694062   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:13.694187   13741 main.go:141] libmachine: Using SSH client type: native
	I0919 16:36:13.694602   13741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0919 16:36:13.694617   13741 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 16:36:13.805015   13741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695141373.773937152
	
	I0919 16:36:13.805037   13741 fix.go:206] guest clock: 1695141373.773937152
	I0919 16:36:13.805047   13741 fix.go:219] Guest: 2023-09-19 16:36:13.773937152 +0000 UTC Remote: 2023-09-19 16:36:13.691137089 +0000 UTC m=+24.715327092 (delta=82.800063ms)
	I0919 16:36:13.805070   13741 fix.go:190] guest clock delta is within tolerance: 82.800063ms
	I0919 16:36:13.805078   13741 start.go:83] releasing machines lock for "addons-897988", held for 24.733115453s
	I0919 16:36:13.805114   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:13.805403   13741 main.go:141] libmachine: (addons-897988) Calling .GetIP
	I0919 16:36:13.807873   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.808238   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:13.808268   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.808462   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:13.808930   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:13.809125   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:13.809210   13741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 16:36:13.809260   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:13.809392   13741 ssh_runner.go:195] Run: cat /version.json
	I0919 16:36:13.809419   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:13.811825   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.812085   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:13.812115   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.812140   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.812338   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:13.812538   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:13.812607   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:13.812637   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:13.812697   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:13.812841   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:13.812850   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:13.813050   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:13.813179   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:13.813285   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:13.916297   13741 ssh_runner.go:195] Run: systemctl --version
	I0919 16:36:13.921948   13741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 16:36:14.080133   13741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 16:36:14.086210   13741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 16:36:14.086284   13741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 16:36:14.102988   13741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 16:36:14.103061   13741 start.go:469] detecting cgroup driver to use...
	I0919 16:36:14.103138   13741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 16:36:14.117270   13741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 16:36:14.130237   13741 docker.go:196] disabling cri-docker service (if available) ...
	I0919 16:36:14.130289   13741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 16:36:14.143213   13741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 16:36:14.156494   13741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 16:36:14.263962   13741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 16:36:14.381123   13741 docker.go:212] disabling docker service ...
	I0919 16:36:14.381196   13741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 16:36:14.394648   13741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 16:36:14.407088   13741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 16:36:14.522214   13741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 16:36:14.628463   13741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 16:36:14.641850   13741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 16:36:14.659145   13741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0919 16:36:14.659212   13741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:36:14.668274   13741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 16:36:14.668328   13741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:36:14.678476   13741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:36:14.688814   13741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:36:14.698323   13741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 16:36:14.709202   13741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 16:36:14.718380   13741 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 16:36:14.718449   13741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 16:36:14.731963   13741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 16:36:14.741375   13741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:36:14.846103   13741 ssh_runner.go:195] Run: sudo systemctl restart crio
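	The sed edits above point CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before this restart. A quick check of the effective settings (a sketch, reusing the config path and profile name from this log):

	  # the keys below are the ones rewritten by the sed commands above
	  minikube -p addons-897988 ssh -- grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf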
	I0919 16:36:15.009592   13741 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 16:36:15.009680   13741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 16:36:15.015148   13741 start.go:537] Will wait 60s for crictl version
	I0919 16:36:15.015231   13741 ssh_runner.go:195] Run: which crictl
	I0919 16:36:15.019030   13741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 16:36:15.059217   13741 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 16:36:15.059317   13741 ssh_runner.go:195] Run: crio --version
	I0919 16:36:15.105291   13741 ssh_runner.go:195] Run: crio --version
	I0919 16:36:15.151391   13741 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I0919 16:36:15.152712   13741 main.go:141] libmachine: (addons-897988) Calling .GetIP
	I0919 16:36:15.155222   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:15.155540   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:15.155579   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:15.155790   13741 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 16:36:15.159907   13741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 16:36:15.171267   13741 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 16:36:15.171334   13741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 16:36:15.205680   13741 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I0919 16:36:15.205735   13741 ssh_runner.go:195] Run: which lz4
	I0919 16:36:15.209554   13741 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0919 16:36:15.213443   13741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 16:36:15.213469   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I0919 16:36:16.967670   13741 crio.go:444] Took 1.758151 seconds to copy over tarball
	I0919 16:36:16.967740   13741 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 16:36:20.041396   13741 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.073629753s)
	I0919 16:36:20.041439   13741 crio.go:451] Took 3.073745 seconds to extract the tarball
	I0919 16:36:20.041453   13741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 16:36:20.084024   13741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 16:36:20.155956   13741 crio.go:496] all images are preloaded for cri-o runtime.
	I0919 16:36:20.155974   13741 cache_images.go:84] Images are preloaded, skipping loading
	I0919 16:36:20.156027   13741 ssh_runner.go:195] Run: crio config
	I0919 16:36:20.220090   13741 cni.go:84] Creating CNI manager for ""
	I0919 16:36:20.220107   13741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 16:36:20.220119   13741 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 16:36:20.220136   13741 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-897988 NodeName:addons-897988 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 16:36:20.220255   13741 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-897988"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 16:36:20.220315   13741 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-897988 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-897988 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
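	The kubeadm config and kubelet flags rendered above can be sanity-checked before they are applied. A sketch, assuming the YAML above is saved locally as kubeadm.yaml and a kubeadm v1.28.x binary is on the PATH:

	  # renders the manifests and reports config errors without changing the host
	  sudo kubeadm init --config kubeadm.yaml --dry-run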
	I0919 16:36:20.220376   13741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 16:36:20.229198   13741 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 16:36:20.229259   13741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 16:36:20.237696   13741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0919 16:36:20.253718   13741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 16:36:20.270355   13741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0919 16:36:20.286356   13741 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0919 16:36:20.290052   13741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 16:36:20.302599   13741 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988 for IP: 192.168.39.206
	I0919 16:36:20.302630   13741 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:20.302754   13741 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 16:36:20.600973   13741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt ...
	I0919 16:36:20.600998   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt: {Name:mk78002ed41f03f67e383f27ccfdee53cddd458e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:20.601150   13741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key ...
	I0919 16:36:20.601161   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key: {Name:mkf9defc45a13d8417fa55d6738e52636cc0667e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:20.601226   13741 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 16:36:20.759653   13741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt ...
	I0919 16:36:20.759679   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt: {Name:mkd46b93105b2c9bb1be124cedc225daa2991286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:20.759820   13741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key ...
	I0919 16:36:20.759830   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key: {Name:mkde382e830c5d7641637337e629183599a9eece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:20.759920   13741 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.key
	I0919 16:36:20.759934   13741 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt with IP's: []
	I0919 16:36:21.095187   13741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt ...
	I0919 16:36:21.095219   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: {Name:mk422030c51913f9ddb3cb34c73204a6a0fa0b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:21.095393   13741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.key ...
	I0919 16:36:21.095411   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.key: {Name:mkb0f7da87bccd2cd4b50f143443bafcc9da3afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:21.095505   13741 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.key.b548e89c
	I0919 16:36:21.095531   13741 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I0919 16:36:21.152241   13741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.crt.b548e89c ...
	I0919 16:36:21.152268   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.crt.b548e89c: {Name:mkf0fed7b37c881627d96fd41459d3ea1e8d7c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:21.152460   13741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.key.b548e89c ...
	I0919 16:36:21.152479   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.key.b548e89c: {Name:mkb645847dcfb362f7a29d7506a063536040141f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:21.152573   13741 certs.go:337] copying /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.crt
	I0919 16:36:21.152660   13741 certs.go:341] copying /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.key
	I0919 16:36:21.152718   13741 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/proxy-client.key
	I0919 16:36:21.152739   13741 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/proxy-client.crt with IP's: []
	I0919 16:36:21.282077   13741 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/proxy-client.crt ...
	I0919 16:36:21.282103   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/proxy-client.crt: {Name:mka35ecf8ed7f3b4755b741f575a477e822cdda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:21.282264   13741 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/proxy-client.key ...
	I0919 16:36:21.282277   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/proxy-client.key: {Name:mk95db8d277688ca8f29122a97bd50351fea0928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:21.282462   13741 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 16:36:21.282510   13741 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 16:36:21.282548   13741 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 16:36:21.282572   13741 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 16:36:21.283122   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 16:36:21.307467   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 16:36:21.330504   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 16:36:21.353514   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 16:36:21.377143   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 16:36:21.400264   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 16:36:21.423398   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 16:36:21.447032   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 16:36:21.473249   13741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 16:36:21.496822   13741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 16:36:21.512927   13741 ssh_runner.go:195] Run: openssl version
	I0919 16:36:21.518440   13741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 16:36:21.528670   13741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:36:21.533235   13741 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:36:21.533282   13741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:36:21.538655   13741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 16:36:21.548142   13741 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 16:36:21.552157   13741 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
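	The apiserver certificate generated above is signed with IPs [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1] in its SANs. A sketch for confirming that on the written file (path taken from this log):

	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'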
	I0919 16:36:21.552210   13741 kubeadm.go:404] StartCluster: {Name:addons-897988 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-897988 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:36:21.552290   13741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 16:36:21.552337   13741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 16:36:21.590974   13741 cri.go:89] found id: ""
	I0919 16:36:21.591055   13741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 16:36:21.600750   13741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 16:36:21.609646   13741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 16:36:21.618490   13741 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 16:36:21.618533   13741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 16:36:21.673516   13741 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 16:36:21.673641   13741 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 16:36:21.807287   13741 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 16:36:21.807429   13741 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 16:36:21.807539   13741 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 16:36:22.037784   13741 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 16:36:22.083204   13741 out.go:204]   - Generating certificates and keys ...
	I0919 16:36:22.083312   13741 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 16:36:22.083393   13741 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 16:36:22.167870   13741 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 16:36:22.354686   13741 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 16:36:22.575950   13741 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 16:36:22.860963   13741 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 16:36:23.328555   13741 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 16:36:23.328873   13741 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-897988 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0919 16:36:23.711236   13741 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 16:36:23.711431   13741 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-897988 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0919 16:36:23.963087   13741 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 16:36:24.381409   13741 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 16:36:24.572095   13741 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 16:36:24.572378   13741 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 16:36:24.766676   13741 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 16:36:25.052693   13741 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 16:36:25.270455   13741 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 16:36:25.525256   13741 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 16:36:25.525889   13741 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 16:36:25.528118   13741 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 16:36:25.530501   13741 out.go:204]   - Booting up control plane ...
	I0919 16:36:25.530645   13741 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 16:36:25.530760   13741 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 16:36:25.531330   13741 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 16:36:25.548619   13741 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 16:36:25.549526   13741 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 16:36:25.549595   13741 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 16:36:25.667430   13741 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 16:36:33.164968   13741 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502512 seconds
	I0919 16:36:33.165108   13741 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 16:36:33.179249   13741 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 16:36:33.710879   13741 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 16:36:33.711113   13741 kubeadm.go:322] [mark-control-plane] Marking the node addons-897988 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 16:36:34.224760   13741 kubeadm.go:322] [bootstrap-token] Using token: vwiotq.znkx33lygpesoznq
	I0919 16:36:34.226314   13741 out.go:204]   - Configuring RBAC rules ...
	I0919 16:36:34.226467   13741 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 16:36:34.231772   13741 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 16:36:34.243859   13741 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 16:36:34.247741   13741 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 16:36:34.255115   13741 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 16:36:34.259898   13741 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 16:36:34.285810   13741 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 16:36:34.547146   13741 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 16:36:34.639405   13741 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 16:36:34.639427   13741 kubeadm.go:322] 
	I0919 16:36:34.639511   13741 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 16:36:34.639525   13741 kubeadm.go:322] 
	I0919 16:36:34.639649   13741 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 16:36:34.639671   13741 kubeadm.go:322] 
	I0919 16:36:34.639704   13741 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 16:36:34.639785   13741 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 16:36:34.639859   13741 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 16:36:34.639868   13741 kubeadm.go:322] 
	I0919 16:36:34.639941   13741 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 16:36:34.639951   13741 kubeadm.go:322] 
	I0919 16:36:34.640026   13741 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 16:36:34.640033   13741 kubeadm.go:322] 
	I0919 16:36:34.640107   13741 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 16:36:34.640210   13741 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 16:36:34.640309   13741 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 16:36:34.640322   13741 kubeadm.go:322] 
	I0919 16:36:34.640460   13741 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 16:36:34.640576   13741 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 16:36:34.640601   13741 kubeadm.go:322] 
	I0919 16:36:34.640706   13741 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vwiotq.znkx33lygpesoznq \
	I0919 16:36:34.640852   13741 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 16:36:34.640899   13741 kubeadm.go:322] 	--control-plane 
	I0919 16:36:34.640909   13741 kubeadm.go:322] 
	I0919 16:36:34.641003   13741 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 16:36:34.641012   13741 kubeadm.go:322] 
	I0919 16:36:34.641140   13741 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vwiotq.znkx33lygpesoznq \
	I0919 16:36:34.641291   13741 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 16:36:34.643940   13741 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
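	The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA to confirm it matches. The standard recipe (a sketch; run on the node, e.g. via minikube -p addons-897988 ssh, against the CA path used in this run):

	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex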
	I0919 16:36:34.643971   13741 cni.go:84] Creating CNI manager for ""
	I0919 16:36:34.643981   13741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 16:36:34.646609   13741 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 16:36:34.647984   13741 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 16:36:34.671188   13741 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 16:36:34.750076   13741 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 16:36:34.750162   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:34.750189   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=addons-897988 minikube.k8s.io/updated_at=2023_09_19T16_36_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:34.971714   13741 ops.go:34] apiserver oom_adj: -16
	I0919 16:36:34.971804   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:35.088689   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:35.684611   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:36.184460   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:36.684649   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:37.184510   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:37.684335   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:38.184041   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:38.684985   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:39.184343   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:39.684014   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:40.184714   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:40.684442   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:41.184469   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:41.684667   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:42.184828   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:42.684987   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:43.184622   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:43.684054   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:44.184360   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:44.684054   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:45.184786   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:45.684034   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:46.184248   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:46.684444   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:47.184761   13741 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:36:47.313742   13741 kubeadm.go:1081] duration metric: took 12.563642892s to wait for elevateKubeSystemPrivileges.
	I0919 16:36:47.313782   13741 kubeadm.go:406] StartCluster complete in 25.761575431s
	I0919 16:36:47.313805   13741 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:47.313950   13741 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:36:47.314450   13741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:36:47.314714   13741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 16:36:47.314767   13741 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0919 16:36:47.314878   13741 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-897988"
	I0919 16:36:47.314891   13741 addons.go:69] Setting ingress=true in profile "addons-897988"
	I0919 16:36:47.314891   13741 config.go:182] Loaded profile config "addons-897988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 16:36:47.314902   13741 addons.go:69] Setting ingress-dns=true in profile "addons-897988"
	I0919 16:36:47.314908   13741 addons.go:231] Setting addon ingress=true in "addons-897988"
	I0919 16:36:47.314921   13741 addons.go:231] Setting addon ingress-dns=true in "addons-897988"
	I0919 16:36:47.314920   13741 addons.go:69] Setting metrics-server=true in profile "addons-897988"
	I0919 16:36:47.314925   13741 addons.go:69] Setting inspektor-gadget=true in profile "addons-897988"
	I0919 16:36:47.314929   13741 addons.go:69] Setting cloud-spanner=true in profile "addons-897988"
	I0919 16:36:47.314943   13741 addons.go:231] Setting addon metrics-server=true in "addons-897988"
	I0919 16:36:47.314939   13741 addons.go:69] Setting registry=true in profile "addons-897988"
	I0919 16:36:47.314893   13741 addons.go:69] Setting default-storageclass=true in profile "addons-897988"
	I0919 16:36:47.314964   13741 addons.go:69] Setting storage-provisioner=true in profile "addons-897988"
	I0919 16:36:47.314971   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.314979   13741 addons.go:231] Setting addon storage-provisioner=true in "addons-897988"
	I0919 16:36:47.314983   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.314980   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.315032   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.314923   13741 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-897988"
	I0919 16:36:47.315074   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.314971   13741 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-897988"
	I0919 16:36:47.314878   13741 addons.go:69] Setting volumesnapshots=true in profile "addons-897988"
	I0919 16:36:47.315139   13741 addons.go:231] Setting addon volumesnapshots=true in "addons-897988"
	I0919 16:36:47.315188   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.315434   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.315464   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.315466   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.315466   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.315471   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.315486   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.314886   13741 addons.go:69] Setting helm-tiller=true in profile "addons-897988"
	I0919 16:36:47.315502   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.314966   13741 addons.go:231] Setting addon registry=true in "addons-897988"
	I0919 16:36:47.314880   13741 addons.go:69] Setting gcp-auth=true in profile "addons-897988"
	I0919 16:36:47.315521   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.315524   13741 mustload.go:65] Loading cluster: addons-897988
	I0919 16:36:47.315537   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.315520   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.315502   13741 addons.go:231] Setting addon helm-tiller=true in "addons-897988"
	I0919 16:36:47.315555   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.314954   13741 addons.go:231] Setting addon cloud-spanner=true in "addons-897988"
	I0919 16:36:47.314952   13741 addons.go:231] Setting addon inspektor-gadget=true in "addons-897988"
	I0919 16:36:47.315538   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.315686   13741 config.go:182] Loaded profile config "addons-897988": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 16:36:47.315705   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.315749   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.316004   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.316019   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.316029   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.316038   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.316076   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.316093   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.316103   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.316226   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.316381   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.316395   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.316434   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.316619   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.316648   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.316832   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.335832   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33271
	I0919 16:36:47.335927   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38917
	I0919 16:36:47.336164   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43489
	I0919 16:36:47.336353   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.336427   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45393
	I0919 16:36:47.336488   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.336561   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.336969   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.337090   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.337098   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.337111   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.337114   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.337126   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.337140   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.336978   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42083
	I0919 16:36:47.337473   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.337518   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.337537   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.337557   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.337697   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.337767   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0919 16:36:47.337860   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.337908   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.338072   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.338091   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.338155   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.338163   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.338200   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.338230   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.338264   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.338463   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.338504   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.338749   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.338771   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.338795   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.339034   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.339433   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.339470   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.339535   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.339556   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.341360   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.341755   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.341790   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.358368   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I0919 16:36:47.359400   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.359943   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.359963   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.360321   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.363889   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0919 16:36:47.363898   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34389
	I0919 16:36:47.363915   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
	I0919 16:36:47.364301   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.364799   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.364971   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.365256   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.365266   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.365272   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.365282   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.365613   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.365792   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.366147   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.366183   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.366289   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.366326   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.366509   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0919 16:36:47.366509   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.366780   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.367162   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.367183   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.367597   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.367697   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.367719   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.367750   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.368129   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.368319   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.369183   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.371586   13741 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0919 16:36:47.372360   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I0919 16:36:47.373539   13741 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 16:36:47.373554   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 16:36:47.373571   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.369605   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.373691   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42431
	I0919 16:36:47.374245   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.375897   13741 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 16:36:47.374530   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.374763   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.376601   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.377404   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.377434   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.377453   13741 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 16:36:47.377469   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 16:36:47.377495   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.377501   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.377253   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.377684   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.377806   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.377852   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.377949   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.378376   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.378394   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.379804   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.379819   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.379881   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43881
	I0919 16:36:47.380290   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.380537   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.381030   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.381070   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.381254   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0919 16:36:47.381462   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.381476   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.382014   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
	I0919 16:36:47.382389   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.382535   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.382671   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.382890   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.383169   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.383200   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.383301   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.383312   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.383516   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.383671   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.383782   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.383798   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.383920   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.384545   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.384561   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.384751   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.384870   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.385363   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.385397   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.385919   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.385940   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.386355   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.388590   13741 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I0919 16:36:47.386847   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0919 16:36:47.390058   13741 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0919 16:36:47.390068   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 16:36:47.390084   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.393403   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.393978   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.394010   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.394173   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.394319   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.394454   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.394588   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.396775   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.397289   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.397305   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.397667   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.397844   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.399418   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.401329   13741 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0919 16:36:47.401339   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0919 16:36:47.401745   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.402720   13741 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 16:36:47.402734   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0919 16:36:47.402751   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.402356   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33813
	I0919 16:36:47.403349   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.403367   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.403905   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.404127   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.404323   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43405
	I0919 16:36:47.404867   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.405511   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.405528   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.405999   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.406230   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.406957   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.407621   13741 addons.go:231] Setting addon default-storageclass=true in "addons-897988"
	I0919 16:36:47.407661   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:47.408019   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.408054   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.408614   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0919 16:36:47.408783   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I0919 16:36:47.408953   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.408965   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.409065   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.410745   13741 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0919 16:36:47.409373   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.410779   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.409462   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.409572   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36413
	I0919 16:36:47.409774   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.409827   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.409986   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.409380   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.412502   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.412549   13741 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0919 16:36:47.412698   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.413144   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.413957   13741 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0919 16:36:47.415740   13741 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0919 16:36:47.417299   13741 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 16:36:47.417319   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 16:36:47.415800   13741 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 16:36:47.417334   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0919 16:36:47.417336   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.417349   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.413220   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.413228   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.414113   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.414742   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.413199   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.417483   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.418072   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.418200   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.418211   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.418255   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.418377   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.418646   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.418843   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.419202   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.419410   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.420105   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.421955   13741 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 16:36:47.423221   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.421523   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.424620   13741 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 16:36:47.422232   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.422457   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.424666   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.422880   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.424693   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.422911   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.423513   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.426045   13741 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0919 16:36:47.423734   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.424739   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.424839   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.427475   13741 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 16:36:47.427688   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.429038   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.430109   13741 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 16:36:47.430187   13741 out.go:177]   - Using image docker.io/registry:2.8.1
	I0919 16:36:47.430200   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 16:36:47.430364   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.430839   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45113
	I0919 16:36:47.432865   13741 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 16:36:47.432885   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 16:36:47.432901   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.431425   13741 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 16:36:47.431442   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.431623   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.431644   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.431765   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.436442   13741 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 16:36:47.438068   13741 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 16:36:47.436680   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.436752   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.437107   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.437265   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.437713   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.441132   13741 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 16:36:47.439763   13741 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0919 16:36:47.439779   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.439807   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.439822   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.439991   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.440010   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.442879   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.442898   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.444542   13741 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 16:36:47.444557   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0919 16:36:47.445814   13741 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 16:36:47.443080   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.443094   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.443240   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.444574   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.448050   13741 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 16:36:47.446034   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.446135   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.447091   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:47.449189   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.449601   13741 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 16:36:47.449613   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 16:36:47.449623   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.449628   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.449640   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.449676   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:47.449779   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.449952   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.450073   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.450205   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.453273   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.453838   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.453867   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.453979   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.454140   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.454275   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.454365   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.465279   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0919 16:36:47.465614   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:47.466017   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:47.466039   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:47.466350   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:47.466527   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:47.467915   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:47.468106   13741 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 16:36:47.468117   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 16:36:47.468129   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:47.471032   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.471471   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:47.471487   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:47.471751   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:47.471871   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:47.472003   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:47.472106   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:47.731585   13741 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 16:36:47.731601   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 16:36:47.795317   13741 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-897988" context rescaled to 1 replicas
	I0919 16:36:47.795352   13741 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 16:36:47.797063   13741 out.go:177] * Verifying Kubernetes components...
	I0919 16:36:47.798504   13741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:36:47.853874   13741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 16:36:47.933849   13741 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 16:36:47.933875   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 16:36:47.936646   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 16:36:47.939656   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 16:36:47.994761   13741 node_ready.go:35] waiting up to 6m0s for node "addons-897988" to be "Ready" ...
	I0919 16:36:48.034202   13741 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 16:36:48.034225   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 16:36:48.048151   13741 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 16:36:48.048178   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 16:36:48.055840   13741 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 16:36:48.055868   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0919 16:36:48.064722   13741 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 16:36:48.064749   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 16:36:48.068859   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 16:36:48.071541   13741 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 16:36:48.071564   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 16:36:48.075510   13741 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 16:36:48.075531   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 16:36:48.076565   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 16:36:48.078807   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 16:36:48.132857   13741 node_ready.go:49] node "addons-897988" has status "Ready":"True"
	I0919 16:36:48.132886   13741 node_ready.go:38] duration metric: took 138.088754ms waiting for node "addons-897988" to be "Ready" ...
	I0919 16:36:48.132897   13741 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 16:36:48.155515   13741 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 16:36:48.155540   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 16:36:48.163709   13741 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 16:36:48.163729   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0919 16:36:48.207721   13741 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 16:36:48.207742   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 16:36:48.228614   13741 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 16:36:48.228634   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 16:36:48.236477   13741 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace to be "Ready" ...
	I0919 16:36:48.242065   13741 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 16:36:48.242084   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 16:36:48.304609   13741 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 16:36:48.304628   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 16:36:48.305641   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 16:36:48.341742   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 16:36:48.374782   13741 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 16:36:48.374808   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 16:36:48.414314   13741 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 16:36:48.414331   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 16:36:48.437387   13741 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 16:36:48.437409   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 16:36:48.451944   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 16:36:48.477434   13741 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 16:36:48.477453   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 16:36:48.528371   13741 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 16:36:48.528390   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 16:36:48.559636   13741 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 16:36:48.559660   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 16:36:48.625821   13741 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 16:36:48.625839   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 16:36:48.637546   13741 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 16:36:48.637564   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 16:36:48.683306   13741 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 16:36:48.683323   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 16:36:48.711595   13741 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 16:36:48.711615   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0919 16:36:48.736536   13741 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 16:36:48.736573   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 16:36:48.767793   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 16:36:48.782705   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 16:36:48.798122   13741 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 16:36:48.798146   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 16:36:48.872287   13741 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 16:36:48.872306   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 16:36:48.914121   13741 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 16:36:48.914141   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 16:36:48.947875   13741 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 16:36:48.947897   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 16:36:48.985083   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 16:36:50.851197   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:36:50.853216   13741 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.999303611s)
	I0919 16:36:50.853240   13741 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 16:36:53.216470   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:36:53.454627   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.517953404s)
	I0919 16:36:53.454682   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:53.454693   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:53.454962   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:53.454980   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:53.454991   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:53.455002   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:53.455230   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:53.455249   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:54.077187   13741 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 16:36:54.077222   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:54.079987   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:54.080361   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:54.080389   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:54.080560   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:54.080776   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:54.080940   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:54.081115   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:54.111625   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.171940323s)
	I0919 16:36:54.111656   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.04277133s)
	I0919 16:36:54.111670   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:54.111674   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:54.111683   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:54.111685   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:54.111959   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:54.111977   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:54.111986   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:54.111994   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:54.112053   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:54.112081   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:54.112094   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:54.112096   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:54.112221   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:54.112257   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:54.112298   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:54.112323   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:54.112559   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:54.112660   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:54.112677   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:54.112689   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:54.112702   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:54.113675   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:54.113708   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:54.113718   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:54.296475   13741 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 16:36:54.357745   13741 addons.go:231] Setting addon gcp-auth=true in "addons-897988"
	I0919 16:36:54.357812   13741 host.go:66] Checking if "addons-897988" exists ...
	I0919 16:36:54.358205   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:54.358253   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:54.372785   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41285
	I0919 16:36:54.373271   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:54.373703   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:54.373726   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:54.374113   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:54.374544   13741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:36:54.374589   13741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:36:54.388596   13741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0919 16:36:54.388966   13741 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:36:54.389374   13741 main.go:141] libmachine: Using API Version  1
	I0919 16:36:54.389393   13741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:36:54.389695   13741 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:36:54.389844   13741 main.go:141] libmachine: (addons-897988) Calling .GetState
	I0919 16:36:54.391339   13741 main.go:141] libmachine: (addons-897988) Calling .DriverName
	I0919 16:36:54.391541   13741 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 16:36:54.391564   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHHostname
	I0919 16:36:54.394280   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:54.394729   13741 main.go:141] libmachine: (addons-897988) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:2d:ba", ip: ""} in network mk-addons-897988: {Iface:virbr1 ExpiryTime:2023-09-19 17:36:05 +0000 UTC Type:0 Mac:52:54:00:01:2d:ba Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-897988 Clientid:01:52:54:00:01:2d:ba}
	I0919 16:36:54.394762   13741 main.go:141] libmachine: (addons-897988) DBG | domain addons-897988 has defined IP address 192.168.39.206 and MAC address 52:54:00:01:2d:ba in network mk-addons-897988
	I0919 16:36:54.394915   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHPort
	I0919 16:36:54.395087   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHKeyPath
	I0919 16:36:54.395261   13741 main.go:141] libmachine: (addons-897988) Calling .GetSSHUsername
	I0919 16:36:54.395415   13741 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/addons-897988/id_rsa Username:docker}
	I0919 16:36:55.429662   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:36:55.440102   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.363501367s)
	I0919 16:36:55.440147   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.440158   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.440164   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.361327259s)
	I0919 16:36:55.440196   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.440217   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.440267   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.134600771s)
	I0919 16:36:55.440291   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.440303   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.440397   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.098625888s)
	I0919 16:36:55.440455   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.440467   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.440499   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.988530009s)
	I0919 16:36:55.440521   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.440531   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.440625   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.672801077s)
	W0919 16:36:55.440657   13741 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 16:36:55.440680   13741 retry.go:31] will retry after 263.315164ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 16:36:55.440711   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.657973177s)
	I0919 16:36:55.440737   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.440752   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.443371   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.443377   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.443387   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.443399   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.443401   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.443406   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.443407   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.443416   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.443420   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.443428   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.443430   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.443425   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.443438   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.443440   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.443449   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.443457   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.443461   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.443467   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.443471   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.443476   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.443428   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.443449   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.443428   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.443497   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.443506   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.443514   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.443484   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.443547   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:55.443558   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:55.445441   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.445449   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.445459   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.445460   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.445468   13741 addons.go:467] Verifying addon registry=true in "addons-897988"
	I0919 16:36:55.445474   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.445477   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.445463   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.445501   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.445512   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.445503   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.445521   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.445530   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.445531   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.445537   13741 addons.go:467] Verifying addon metrics-server=true in "addons-897988"
	I0919 16:36:55.445541   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.448331   13741 out.go:177] * Verifying registry addon...
	I0919 16:36:55.445441   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:55.445674   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:55.449749   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:55.449761   13741 addons.go:467] Verifying addon ingress=true in "addons-897988"
	I0919 16:36:55.451567   13741 out.go:177] * Verifying ingress addon...
	I0919 16:36:55.450640   13741 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 16:36:55.453591   13741 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 16:36:55.466230   13741 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 16:36:55.466245   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:55.469752   13741 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 16:36:55.469767   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:55.495133   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:55.495281   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:55.705173   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 16:36:56.009644   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:56.045468   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:56.508082   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:56.512661   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:56.584372   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.599237791s)
	I0919 16:36:56.584439   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:56.584454   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:56.584451   13741 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.192874353s)
	I0919 16:36:56.586138   13741 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0919 16:36:56.584726   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:56.584753   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:56.587548   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:56.587561   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:56.587571   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:56.588779   13741 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0919 16:36:56.587906   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:56.588824   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:56.588839   13741 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-897988"
	I0919 16:36:56.587934   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:56.590256   13741 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 16:36:56.592130   13741 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 16:36:56.592144   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 16:36:56.593036   13741 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 16:36:56.640180   13741 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 16:36:56.640207   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:36:56.694886   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:36:56.792440   13741 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 16:36:56.792462   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 16:36:56.847732   13741 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 16:36:56.847757   13741 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0919 16:36:56.877475   13741 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 16:36:57.019847   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:57.022235   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:57.252111   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:36:57.529797   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:57.540324   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:57.710406   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:36:57.937665   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:36:58.017919   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:58.018898   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:58.217204   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:36:58.222599   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.517389859s)
	I0919 16:36:58.222642   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:58.222650   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:58.222932   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:58.222951   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:58.222966   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:58.222975   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:58.223273   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:58.223325   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:58.223339   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:58.510444   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:58.522002   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:58.668838   13741 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.791325443s)
	I0919 16:36:58.668881   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:58.668891   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:58.669168   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:58.669188   13741 main.go:141] libmachine: (addons-897988) DBG | Closing plugin on server side
	I0919 16:36:58.669189   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:58.669208   13741 main.go:141] libmachine: Making call to close driver server
	I0919 16:36:58.669218   13741 main.go:141] libmachine: (addons-897988) Calling .Close
	I0919 16:36:58.669418   13741 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:36:58.669435   13741 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:36:58.670886   13741 addons.go:467] Verifying addon gcp-auth=true in "addons-897988"
	I0919 16:36:58.672507   13741 out.go:177] * Verifying gcp-auth addon...
	I0919 16:36:58.674773   13741 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 16:36:58.688425   13741 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 16:36:58.688442   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:36:58.698943   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:36:58.716186   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:36:59.012031   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:59.017307   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:59.222381   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:36:59.234634   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:36:59.516035   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:36:59.516501   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:36:59.702947   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:36:59.704696   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:00.002270   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:00.002418   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:00.205315   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:00.207466   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:00.403534   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:00.501591   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:00.501807   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:00.701490   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:00.703603   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:00.999878   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:01.000213   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:01.199953   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:01.202755   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:01.502412   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:01.502543   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:01.700450   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:01.702562   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:02.001267   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:02.001354   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:02.208757   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:02.210030   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:02.504347   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:02.507479   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:02.703378   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:02.703819   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:02.907446   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:03.000329   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:03.008208   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:03.205646   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:03.206091   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:03.505211   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:03.506210   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:03.707645   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:03.712177   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:04.013043   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:04.013511   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:04.202359   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:04.207558   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:04.502384   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:04.502414   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:04.706849   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:04.708269   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:04.933941   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:05.009613   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:05.011226   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:05.204033   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:05.207774   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:05.507096   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:05.507114   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:05.701404   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:05.702861   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:06.002539   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:06.002825   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:06.214718   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:06.215348   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:06.501984   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:06.503748   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:06.704899   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:06.706474   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:07.010187   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:07.010242   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:07.203206   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:07.203246   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:07.404905   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:07.509067   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:07.509660   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:07.711665   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:07.731430   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:08.005597   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:08.006971   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:08.204488   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:08.207555   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:08.503663   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:08.507331   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:08.701550   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:08.707407   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:09.000861   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:09.004023   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:09.200983   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:09.203064   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:09.405596   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:09.501483   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:09.502564   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:09.703352   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:09.703933   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:10.001270   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:10.001444   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:10.203155   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:10.203722   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:10.500892   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:10.501536   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:10.701011   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:10.702899   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:11.001853   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:11.003114   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:11.202301   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:11.203065   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:11.502479   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:11.502614   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:11.700718   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:11.702817   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:11.901266   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:12.002817   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:12.002956   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:12.201713   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:12.211855   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:12.501878   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:12.503114   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:12.702314   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:12.705355   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:13.002298   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:13.002324   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:13.200040   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:13.202269   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:13.502102   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:13.502245   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:13.701290   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:13.702024   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:13.903275   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:14.002577   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:14.003064   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:14.200660   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:14.202321   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:14.502850   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:14.503227   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:14.700802   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:14.702535   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:15.001750   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:15.002290   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:15.201920   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:15.202349   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:15.501999   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:15.503313   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:15.702961   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:15.704622   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:15.999943   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:16.001310   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:16.200213   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:16.201915   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:16.400066   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:16.501286   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:16.502380   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:16.702866   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:16.703403   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:17.000739   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:17.001060   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:17.206801   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:17.210952   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:18.011010   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:18.011928   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:18.012107   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:18.014523   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:18.019735   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:18.019959   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:18.201062   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:18.203526   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:18.402940   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:18.502515   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:18.502567   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:18.700913   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:18.702781   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:19.002278   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:19.003028   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:19.202992   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:19.203184   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:19.501477   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:19.501616   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:19.701050   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:19.703105   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:20.003334   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:20.005456   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:20.201093   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:20.202623   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:20.503838   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:20.504935   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:20.701391   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:20.703141   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:20.902600   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:21.001508   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:21.001796   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:21.200532   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:21.202238   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:21.501349   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:21.503911   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:21.704758   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:21.705573   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:22.000179   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:22.001173   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:22.200241   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:22.202432   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:22.500818   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:22.501130   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:22.702186   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:22.702269   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:23.065271   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:23.065855   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:23.077461   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:23.200860   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:23.202672   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:23.500696   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:23.501841   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:23.701647   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:23.703512   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:24.001060   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:24.001986   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:24.201671   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:24.204130   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:24.502039   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:24.502169   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:24.703085   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:24.703430   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:25.002043   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:25.002453   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:25.200166   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:25.202287   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:25.400449   13741 pod_ready.go:102] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:37:25.510522   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:25.510594   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:25.700986   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:25.702891   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:26.000110   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:26.000803   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:26.202238   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:26.203625   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:26.501773   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:26.502719   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:26.702060   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:26.705960   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:27.000907   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:27.001394   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:27.342234   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:27.344424   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:27.531100   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:27.532856   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:27.701240   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:27.703662   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:27.903854   13741 pod_ready.go:92] pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace has status "Ready":"True"
	I0919 16:37:27.903880   13741 pod_ready.go:81] duration metric: took 39.667379311s waiting for pod "coredns-5dd5756b68-flkc5" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:27.903894   13741 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w42f5" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:27.906468   13741 pod_ready.go:97] error getting pod "coredns-5dd5756b68-w42f5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w42f5" not found
	I0919 16:37:27.906492   13741 pod_ready.go:81] duration metric: took 2.590008ms waiting for pod "coredns-5dd5756b68-w42f5" in "kube-system" namespace to be "Ready" ...
	E0919 16:37:27.906506   13741 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-w42f5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-w42f5" not found
	I0919 16:37:27.906515   13741 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-897988" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:27.915059   13741 pod_ready.go:92] pod "etcd-addons-897988" in "kube-system" namespace has status "Ready":"True"
	I0919 16:37:27.915082   13741 pod_ready.go:81] duration metric: took 8.558765ms waiting for pod "etcd-addons-897988" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:27.915094   13741 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-897988" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:27.923853   13741 pod_ready.go:92] pod "kube-apiserver-addons-897988" in "kube-system" namespace has status "Ready":"True"
	I0919 16:37:27.923878   13741 pod_ready.go:81] duration metric: took 8.775088ms waiting for pod "kube-apiserver-addons-897988" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:27.923891   13741 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-897988" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:27.930090   13741 pod_ready.go:92] pod "kube-controller-manager-addons-897988" in "kube-system" namespace has status "Ready":"True"
	I0919 16:37:27.930113   13741 pod_ready.go:81] duration metric: took 6.211548ms waiting for pod "kube-controller-manager-addons-897988" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:27.930125   13741 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zd4qq" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:28.001994   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:28.003902   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:28.098071   13741 pod_ready.go:92] pod "kube-proxy-zd4qq" in "kube-system" namespace has status "Ready":"True"
	I0919 16:37:28.098094   13741 pod_ready.go:81] duration metric: took 167.960066ms waiting for pod "kube-proxy-zd4qq" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:28.098105   13741 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-897988" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:28.200466   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:28.202018   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:28.498225   13741 pod_ready.go:92] pod "kube-scheduler-addons-897988" in "kube-system" namespace has status "Ready":"True"
	I0919 16:37:28.498248   13741 pod_ready.go:81] duration metric: took 400.134315ms waiting for pod "kube-scheduler-addons-897988" in "kube-system" namespace to be "Ready" ...
	I0919 16:37:28.498257   13741 pod_ready.go:38] duration metric: took 40.365345672s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 16:37:28.498277   13741 api_server.go:52] waiting for apiserver process to appear ...
	I0919 16:37:28.498324   13741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 16:37:28.504019   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:28.504298   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:28.528130   13741 api_server.go:72] duration metric: took 40.7327494s to wait for apiserver process to appear ...
	I0919 16:37:28.528149   13741 api_server.go:88] waiting for apiserver healthz status ...
	I0919 16:37:28.528170   13741 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I0919 16:37:28.533750   13741 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I0919 16:37:28.535090   13741 api_server.go:141] control plane version: v1.28.2
	I0919 16:37:28.535110   13741 api_server.go:131] duration metric: took 6.95539ms to wait for apiserver health ...
	I0919 16:37:28.535117   13741 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 16:37:28.700847   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:28.707796   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:28.738168   13741 system_pods.go:59] 17 kube-system pods found
	I0919 16:37:28.738200   13741 system_pods.go:61] "coredns-5dd5756b68-flkc5" [6ad4ab33-2651-43cd-9d4f-185d140dec27] Running
	I0919 16:37:28.738207   13741 system_pods.go:61] "csi-hostpath-attacher-0" [e85c9c7c-019b-452b-8a37-9cbff21fc4a4] Running
	I0919 16:37:28.738213   13741 system_pods.go:61] "csi-hostpath-resizer-0" [0caf6205-0b6f-454c-ab22-f2dc9e577eda] Running
	I0919 16:37:28.738223   13741 system_pods.go:61] "csi-hostpathplugin-fzbh9" [03ebb44b-1214-4a03-84ab-9e6197ee8f9e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 16:37:28.738231   13741 system_pods.go:61] "etcd-addons-897988" [c4a16b86-7876-438e-8883-0ce2c4e1b81a] Running
	I0919 16:37:28.738238   13741 system_pods.go:61] "kube-apiserver-addons-897988" [49ef5f8b-d33c-4b0f-bace-3cd121ab7b91] Running
	I0919 16:37:28.738245   13741 system_pods.go:61] "kube-controller-manager-addons-897988" [6a52ba3e-f468-4325-a022-2fa8f5769d66] Running
	I0919 16:37:28.738255   13741 system_pods.go:61] "kube-ingress-dns-minikube" [32e13698-283b-49d8-a578-e41da4746dd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 16:37:28.738267   13741 system_pods.go:61] "kube-proxy-zd4qq" [426ae7e6-aea9-44f6-b00e-1d3a5e93e5cc] Running
	I0919 16:37:28.738275   13741 system_pods.go:61] "kube-scheduler-addons-897988" [4e5ce02b-e46e-4b5d-abb9-2c8f22cd3bed] Running
	I0919 16:37:28.738290   13741 system_pods.go:61] "metrics-server-7c66d45ddc-l5q8s" [543e5e73-a2c2-45fa-a365-1c30b46a6ed9] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 16:37:28.738301   13741 system_pods.go:61] "registry-proxy-kw8ch" [7941393c-e3f2-4002-b948-b9ce20653d5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 16:37:28.738315   13741 system_pods.go:61] "registry-r8jt6" [f55a59ff-10e0-4243-a3ae-4c53d0872417] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 16:37:28.738330   13741 system_pods.go:61] "snapshot-controller-58dbcc7b99-bddl2" [1b04d2fd-49f8-4316-be98-e861cdf4987a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 16:37:28.738344   13741 system_pods.go:61] "snapshot-controller-58dbcc7b99-fcmn8" [f5c70436-bfae-4091-910d-7f1ba70a103d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 16:37:28.738355   13741 system_pods.go:61] "storage-provisioner" [aac4da53-a9fc-42b1-8e4a-f0020a1acaf5] Running
	I0919 16:37:28.738366   13741 system_pods.go:61] "tiller-deploy-7b677967b9-v8tgw" [a7a928d8-d956-4a81-a86d-bc13b9070b40] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0919 16:37:28.738378   13741 system_pods.go:74] duration metric: took 203.253498ms to wait for pod list to return data ...
	I0919 16:37:28.738392   13741 default_sa.go:34] waiting for default service account to be created ...
	I0919 16:37:28.909589   13741 default_sa.go:45] found service account: "default"
	I0919 16:37:28.909615   13741 default_sa.go:55] duration metric: took 171.214068ms for default service account to be created ...
	I0919 16:37:28.909623   13741 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 16:37:29.003752   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:29.003997   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:29.106108   13741 system_pods.go:86] 17 kube-system pods found
	I0919 16:37:29.106148   13741 system_pods.go:89] "coredns-5dd5756b68-flkc5" [6ad4ab33-2651-43cd-9d4f-185d140dec27] Running
	I0919 16:37:29.106158   13741 system_pods.go:89] "csi-hostpath-attacher-0" [e85c9c7c-019b-452b-8a37-9cbff21fc4a4] Running
	I0919 16:37:29.106166   13741 system_pods.go:89] "csi-hostpath-resizer-0" [0caf6205-0b6f-454c-ab22-f2dc9e577eda] Running
	I0919 16:37:29.106181   13741 system_pods.go:89] "csi-hostpathplugin-fzbh9" [03ebb44b-1214-4a03-84ab-9e6197ee8f9e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 16:37:29.106192   13741 system_pods.go:89] "etcd-addons-897988" [c4a16b86-7876-438e-8883-0ce2c4e1b81a] Running
	I0919 16:37:29.106209   13741 system_pods.go:89] "kube-apiserver-addons-897988" [49ef5f8b-d33c-4b0f-bace-3cd121ab7b91] Running
	I0919 16:37:29.106217   13741 system_pods.go:89] "kube-controller-manager-addons-897988" [6a52ba3e-f468-4325-a022-2fa8f5769d66] Running
	I0919 16:37:29.106234   13741 system_pods.go:89] "kube-ingress-dns-minikube" [32e13698-283b-49d8-a578-e41da4746dd0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 16:37:29.106245   13741 system_pods.go:89] "kube-proxy-zd4qq" [426ae7e6-aea9-44f6-b00e-1d3a5e93e5cc] Running
	I0919 16:37:29.106255   13741 system_pods.go:89] "kube-scheduler-addons-897988" [4e5ce02b-e46e-4b5d-abb9-2c8f22cd3bed] Running
	I0919 16:37:29.106270   13741 system_pods.go:89] "metrics-server-7c66d45ddc-l5q8s" [543e5e73-a2c2-45fa-a365-1c30b46a6ed9] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 16:37:29.106284   13741 system_pods.go:89] "registry-proxy-kw8ch" [7941393c-e3f2-4002-b948-b9ce20653d5a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 16:37:29.106297   13741 system_pods.go:89] "registry-r8jt6" [f55a59ff-10e0-4243-a3ae-4c53d0872417] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 16:37:29.106313   13741 system_pods.go:89] "snapshot-controller-58dbcc7b99-bddl2" [1b04d2fd-49f8-4316-be98-e861cdf4987a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 16:37:29.106327   13741 system_pods.go:89] "snapshot-controller-58dbcc7b99-fcmn8" [f5c70436-bfae-4091-910d-7f1ba70a103d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 16:37:29.106337   13741 system_pods.go:89] "storage-provisioner" [aac4da53-a9fc-42b1-8e4a-f0020a1acaf5] Running
	I0919 16:37:29.106349   13741 system_pods.go:89] "tiller-deploy-7b677967b9-v8tgw" [a7a928d8-d956-4a81-a86d-bc13b9070b40] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0919 16:37:29.106362   13741 system_pods.go:126] duration metric: took 196.732443ms to wait for k8s-apps to be running ...
	I0919 16:37:29.106375   13741 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 16:37:29.106438   13741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:37:29.146415   13741 system_svc.go:56] duration metric: took 40.027714ms WaitForService to wait for kubelet.
	I0919 16:37:29.146442   13741 kubeadm.go:581] duration metric: took 41.351065252s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 16:37:29.146469   13741 node_conditions.go:102] verifying NodePressure condition ...
	I0919 16:37:29.201382   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:29.206988   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:29.296948   13741 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 16:37:29.296991   13741 node_conditions.go:123] node cpu capacity is 2
	I0919 16:37:29.297001   13741 node_conditions.go:105] duration metric: took 150.527103ms to run NodePressure ...
	I0919 16:37:29.297012   13741 start.go:228] waiting for startup goroutines ...
	I0919 16:37:29.501448   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:29.505018   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:29.920386   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:29.921257   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:30.001263   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:30.002921   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:30.201016   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:30.203098   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:30.500585   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:30.503304   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:30.701876   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:30.702862   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:31.002068   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:31.002438   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:31.200579   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:31.202685   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:31.502664   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:31.502769   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:31.702971   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:31.706333   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:32.001979   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:32.002673   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:32.201138   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:32.204360   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:32.501737   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:32.503970   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:32.700600   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:32.702419   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:33.004521   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:33.005480   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:33.202326   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:33.206989   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:33.501551   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:33.501629   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:33.708070   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:33.709285   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:34.004466   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:34.004602   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:34.200982   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:34.206230   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:34.504383   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:34.516523   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:34.770392   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:34.770807   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:35.003853   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:35.008124   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:35.204671   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:35.223422   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:35.509675   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:35.510857   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:35.985856   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:35.986386   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:36.004115   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:36.004481   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:36.214647   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:36.220042   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:36.500631   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:36.501946   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:36.702395   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:36.702749   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:37.000699   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:37.001270   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:37.202752   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:37.204022   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:37.506391   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:37.507598   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:37.700908   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:37.703090   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:38.001658   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:38.001712   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:38.201889   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:38.204035   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:38.500677   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:38.501563   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:38.701484   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:38.703174   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:38.999776   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:39.001353   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:39.201878   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:39.202960   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:39.501509   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:39.502619   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:39.701455   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:39.703123   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:40.001600   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:40.001744   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:40.205537   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:40.205774   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:40.504495   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:40.504795   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:40.704047   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:40.704481   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:41.002314   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:41.002494   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:41.200178   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:41.206869   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:41.501256   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:41.501406   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:41.702629   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:41.702807   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:42.003339   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:42.003869   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:42.200373   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:42.202806   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:42.501891   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:42.502145   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:42.702533   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:42.703534   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:43.006797   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:43.008429   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:43.201913   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:43.207444   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:43.502927   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:43.503253   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:43.701945   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:43.705342   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:44.003107   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:44.004176   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:44.200877   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:44.203064   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:44.501066   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:44.502972   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:44.702013   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:44.704189   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:45.001100   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:45.001736   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:45.202727   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:45.206333   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:45.499679   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:45.500089   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:45.700546   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:45.702390   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:46.000585   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:46.002046   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:46.202789   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:46.205057   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:46.501084   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:46.503014   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:46.704214   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:46.704954   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:47.000001   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:47.001848   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:47.201064   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:47.203188   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:47.500772   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:47.501269   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:47.703785   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:47.707312   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:48.002505   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:48.002764   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:48.201467   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:48.203473   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:48.502701   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:48.503551   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:48.702588   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:48.703555   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:49.001202   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:49.004211   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:49.204011   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:49.206586   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:49.503066   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:49.504600   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:50.072588   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:50.073355   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:50.073453   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:50.073644   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:50.200530   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:50.202626   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:50.501416   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:50.502158   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:50.701313   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:50.702591   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:51.003344   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:51.003808   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:51.202497   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:51.203277   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:51.508209   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:51.508326   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:51.700179   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:51.703526   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:52.000118   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:52.002217   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:52.201475   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:52.203377   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:52.500628   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:52.500844   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 16:37:52.703956   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:52.704826   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:53.002200   13741 kapi.go:107] duration metric: took 57.551558678s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 16:37:53.002623   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:53.200776   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:53.202908   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:53.501761   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:53.701211   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:53.702985   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:54.286772   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:54.287352   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:54.288907   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:54.505180   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:54.701270   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:54.703884   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:55.001188   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:55.201633   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:55.203528   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:55.500237   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:55.702416   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:55.704678   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:56.001158   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:56.202607   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:56.204715   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:56.500970   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:56.702343   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:56.703179   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:57.000718   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:57.214331   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:57.218709   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:57.500890   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:57.701850   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:57.705196   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:58.004846   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:58.202121   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:58.202583   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:58.517300   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:58.701328   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:58.703035   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:59.000997   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:59.200210   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:59.202807   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:37:59.510433   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:37:59.702411   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:37:59.705035   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:00.015238   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:00.214131   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:00.220322   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:00.501338   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:00.709004   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:00.709335   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:01.001049   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:01.206889   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:01.210108   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:01.500790   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:01.702647   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:01.708587   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:02.001799   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:02.202450   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:02.206207   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:02.500949   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:02.701406   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:02.702915   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:03.001280   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:03.203432   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:03.204471   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:03.838885   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:03.841389   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:03.841765   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:04.019318   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:04.201094   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:04.202771   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:04.501273   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:04.728756   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:04.739486   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:05.015400   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:05.204856   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:05.215594   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:05.502809   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:05.702856   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:05.703836   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:06.000660   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:06.208932   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:06.210851   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:06.500281   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:06.701473   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:06.702013   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:07.000711   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:07.331710   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:07.334003   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:07.500427   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:07.702348   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:07.704490   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:08.000966   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:08.201582   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:08.203466   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:08.500118   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:08.701809   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:08.704988   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:09.000821   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:09.201601   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:09.204399   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:09.500462   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:09.726968   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:09.727481   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:10.115530   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:10.207197   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:10.214078   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:10.500895   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:10.700488   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:10.701777   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:11.008933   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:11.201574   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:11.207125   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:11.500753   13741 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 16:38:11.700890   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:11.706550   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:12.002344   13741 kapi.go:107] duration metric: took 1m16.54874757s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 16:38:12.201702   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:12.202420   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:12.701937   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:12.704041   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:13.201604   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:13.205259   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:13.703082   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:13.703813   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:14.201430   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:14.204236   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:14.703388   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:14.708957   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:15.201421   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:15.203483   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:15.703090   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:15.704105   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:16.203042   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:16.203676   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:16.701775   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:16.703608   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 16:38:17.203689   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:17.207916   13741 kapi.go:107] duration metric: took 1m18.533139807s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 16:38:17.209691   13741 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-897988 cluster.
	I0919 16:38:17.211242   13741 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 16:38:17.212799   13741 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
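	As a hedged illustration of the two options described in the gcp-auth messages above — only the `gcp-auth-skip-secret` label key and the `--refresh` flag come from the output itself; the pod name, image tag, and label value "true" are placeholders/assumptions:

	    # Create a pod whose GCP credentials should NOT be mounted, by labelling it at creation time
	    # (the addon output names only the label key; the value "true" is assumed here).
	    kubectl --context addons-897988 run skip-creds-demo \
	      --image=gcr.io/google-samples/hello-app:1.0 \
	      --labels=gcp-auth-skip-secret=true

	    # Re-mount credentials into pods that already existed when the addon was enabled
	    # (assumed form of the "rerun addons enable with --refresh" suggestion above).
	    minikube -p addons-897988 addons enable gcp-auth --refresh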
	I0919 16:38:17.704581   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:18.201265   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:18.700247   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:19.202561   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:19.702140   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:20.201862   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:20.700728   13741 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 16:38:21.201013   13741 kapi.go:107] duration metric: took 1m24.607978084s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 16:38:21.203148   13741 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, default-storageclass, helm-tiller, inspektor-gadget, metrics-server, cloud-spanner, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0919 16:38:21.204447   13741 addons.go:502] enable addons completed in 1m33.889684627s: enabled=[ingress-dns storage-provisioner default-storageclass helm-tiller inspektor-gadget metrics-server cloud-spanner volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0919 16:38:21.204484   13741 start.go:233] waiting for cluster config update ...
	I0919 16:38:21.204508   13741 start.go:242] writing updated cluster config ...
	I0919 16:38:21.204732   13741 ssh_runner.go:195] Run: rm -f paused
	I0919 16:38:21.253318   13741 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 16:38:21.255039   13741 out.go:177] * Done! kubectl is now configured to use "addons-897988" cluster and "default" namespace by default
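	For reference, the three label selectors polled by the kapi.go wait loop above can be inspected directly. A minimal sketch — the selectors are copied from the log lines; `-A` simply searches all namespaces:

	    kubectl --context addons-897988 get pods -A -l app.kubernetes.io/name=ingress-nginx
	    kubectl --context addons-897988 get pods -A -l kubernetes.io/minikube-addons=gcp-auth
	    kubectl --context addons-897988 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver

	Each command lists the pods behind one of the "waiting for pod ... current state: Pending" selectors, so a Pending state that never clears points at the corresponding addon.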
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 16:36:02 UTC, ends at Tue 2023-09-19 16:41:13 UTC. --
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.344872573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695141673344855472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484777,},InodesUsed:&UInt64Value{Value:202,},},},}" file="go-grpc-middleware/chain.go:25" id=d87c6e84-6f8b-46ef-a973-e6109da97e36 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.345555920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4721b371-93ae-4ca8-957c-17374c240b5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.345603671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4721b371-93ae-4ca8-957c-17374c240b5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.345894441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21af637b27f2444f65f96c1b4a1d2c3e3401c8fde5a7ab00fc0e9dcf571981c2,PodSandboxId:c125e1c24bbd30f0c0209db7f84c65f3e9150688c4f0b8bdbfd14f4d1b589f86,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1695141665906328194,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-jd6cs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3532a0d3-1e5b-40e7-b77b-828f3ba927c7,},Annotations:map[string]string{io.kubernetes.container.hash: 5552bb64,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baaea065a819e8c6f9f5255ea135aad314ac8c49ccc585b4402ffdbe1954936b,PodSandboxId:b9744714d7ee18e1dd5625c40a51b0392a8537e0086448587ab57205fbf902f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1695141526002219873,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ac2608-6ed2-49a6-97a1-9efaf2f4b32d,},Annotations:map[string]string{io.kubernet
es.container.hash: 96e1c392,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e558e16ec3ed0a5129b5db1dcaf399f3a5f682dbf5f2edd89cb75cd389b4d2,PodSandboxId:a641abdceb01065c37599f25a5ef5f894830853fd8a18417403f57a9946c29d5,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1695141518139448553,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-zm88j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 960afd60-f5d7-4309-8163-df6fb3d4fb88,},Annotations:map[string]string{io.kubernetes.container.hash: e0a17fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2100e21b58ac1f64330495e736b4292b0d3a3a107b17774056508246c462a2a4,PodSandboxId:b86b674149d966870499ce7d54072ae118d797bb31d3097378f303257a24e6fe,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1695141497700447218,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ing
ress-nginx-admission-patch-9cvjs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 48b949bf-cead-485f-b08f-9359f9a13a28,},Annotations:map[string]string{io.kubernetes.container.hash: 3c1418f0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:433f3b8e04289102708cae4fa4e1422c0dc9d8c43ba96a204bc8312e14f42212,PodSandboxId:8f7c52695c1f12310db8c5ba6ece0d380c5bebcf5a92df7764d80058d1f4f0f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1695141495930345166,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-xx2pf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 34f5fdfd-9dd9-4991-9b6d-c3bc2303f567,},Annotations:map[string]string{io.kubernetes.container.hash: 165d56ef,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920b249a69ea9bc2d241989861d47ef9234b0cc0f3cba5f6d30c6a1cb2209efb,PodSandboxId:c4b5e60bc45c508e4c7c6619861b379ecfb53306ca91741be7bc4b1627668331,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf8
0f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1695141474495588280,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s7ncj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6515d5b-40f8-4027-afc0-1f13e4efeb11,},Annotations:map[string]string{io.kubernetes.container.hash: d9af3d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22040a3d53b536dbc444b11b700a40bf06541608da2570a4ed934880f0acc0a5,PodSandboxId:11e45f833b1f31293598ba7292668c4f6071a3c050c13f1cadddadf64f21ce9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695141456004191376,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac4da53-a9fc-42b1-8e4a-f0020a1acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: b92ed4c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be5d58f9e15674a5efae4e29a2dac121525f092c795b6a44ca80db8a6709adf,PodSandboxId:11e45f833b1f31293598ba7292668c4f6071a3c050c13f1cadddadf64f21ce9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695141424327902873,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac4da53-a9fc-42b1-8e4a-f0020a1acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: b92ed4c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e30fa2c0a41d85334abde832ab9a45c8682d0e1080cc89efe0421743f2e89d8,PodSandboxId:478906674d5438c4c15be162df7aaa86a58441a9b049587e93032c837350f35e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State
:CONTAINER_RUNNING,CreatedAt:1695141422192629312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zd4qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 426ae7e6-aea9-44f6-b00e-1d3a5e93e5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 812ef02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7abf9e9e4c8c56f30fa6f917aa04ca6973596ce647e958988e66277d4630a273,PodSandboxId:5bcdae46baa299359c4f732538e7181416adce62b94ed67f1bde61667b6461ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695
141412675214757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-flkc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad4ab33-2651-43cd-9d4f-185d140dec27,},Annotations:map[string]string{io.kubernetes.container.hash: 74d3fc2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0977df8ca96ce6b385ce2f8b552dc3ff6bf09a9073415dca148849f4322784a0,PodSandboxId:544b79188eb8d1b0af7c74d932b4e1e796386138852b6a9a957d898c83e76f1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e
6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695141387642443241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807d8666e3be11f3f5f5702f8ca8bba2,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbbdd1dfb9b0d10890df1eadbdf691a4dade526a2b8bd7dd981048d7c1329b7,PodSandboxId:515f750642b84241c9ea2bc5236e93a43e6bdc2f6df1e2e08fe220ed7b9b6935,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde
94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695141387401597229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba43d6c4ce89b42aa3dbd2fb38b3020,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ab4eb8ba0cd90742a9c350d4963030440cd7edcc8cf3b0374e22638117bc21,PodSandboxId:558681aa2031f1cd46f12585ad3ab53328865a62d985faba90b7703201c6f586,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2f
d2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695141387422793515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf8666c162eedd8ee6f0ad3b40dd753f,},Annotations:map[string]string{io.kubernetes.container.hash: 5da19477,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e21e23e1c11acf4d0ea8c57cea1bcef976233d3cad2f032d3ebf8c1cc3a1fb,PodSandboxId:85f2df22071cd738e3341b91c0e6d83612e896b4e756b235e0a19e52aece0126,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string
{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695141387238388014,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a67ccb0c23db0fb8482528a3f30ae2,},Annotations:map[string]string{io.kubernetes.container.hash: eec86e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4721b371-93ae-4ca8-957c-17374c240b5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.382842545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=04909ebf-c53e-4f9b-b082-dcc7c9d2c813 name=/runtime.v1.RuntimeService/Version
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.382966498Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=04909ebf-c53e-4f9b-b082-dcc7c9d2c813 name=/runtime.v1.RuntimeService/Version
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.385045864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=375f70ad-cb1a-4fa6-b815-2759a7f1bcd0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.387178843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695141673387134988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484777,},InodesUsed:&UInt64Value{Value:202,},},},}" file="go-grpc-middleware/chain.go:25" id=375f70ad-cb1a-4fa6-b815-2759a7f1bcd0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.388223573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1f29c7b7-8c72-4b7b-b019-4a449618d3a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.388356101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1f29c7b7-8c72-4b7b-b019-4a449618d3a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.388907994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21af637b27f2444f65f96c1b4a1d2c3e3401c8fde5a7ab00fc0e9dcf571981c2,PodSandboxId:c125e1c24bbd30f0c0209db7f84c65f3e9150688c4f0b8bdbfd14f4d1b589f86,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1695141665906328194,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-jd6cs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3532a0d3-1e5b-40e7-b77b-828f3ba927c7,},Annotations:map[string]string{io.kubernetes.container.hash: 5552bb64,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baaea065a819e8c6f9f5255ea135aad314ac8c49ccc585b4402ffdbe1954936b,PodSandboxId:b9744714d7ee18e1dd5625c40a51b0392a8537e0086448587ab57205fbf902f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1695141526002219873,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ac2608-6ed2-49a6-97a1-9efaf2f4b32d,},Annotations:map[string]string{io.kubernet
es.container.hash: 96e1c392,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e558e16ec3ed0a5129b5db1dcaf399f3a5f682dbf5f2edd89cb75cd389b4d2,PodSandboxId:a641abdceb01065c37599f25a5ef5f894830853fd8a18417403f57a9946c29d5,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1695141518139448553,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-zm88j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 960afd60-f5d7-4309-8163-df6fb3d4fb88,},Annotations:map[string]string{io.kubernetes.container.hash: e0a17fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2100e21b58ac1f64330495e736b4292b0d3a3a107b17774056508246c462a2a4,PodSandboxId:b86b674149d966870499ce7d54072ae118d797bb31d3097378f303257a24e6fe,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1695141497700447218,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ing
ress-nginx-admission-patch-9cvjs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 48b949bf-cead-485f-b08f-9359f9a13a28,},Annotations:map[string]string{io.kubernetes.container.hash: 3c1418f0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:433f3b8e04289102708cae4fa4e1422c0dc9d8c43ba96a204bc8312e14f42212,PodSandboxId:8f7c52695c1f12310db8c5ba6ece0d380c5bebcf5a92df7764d80058d1f4f0f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1695141495930345166,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-xx2pf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 34f5fdfd-9dd9-4991-9b6d-c3bc2303f567,},Annotations:map[string]string{io.kubernetes.container.hash: 165d56ef,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920b249a69ea9bc2d241989861d47ef9234b0cc0f3cba5f6d30c6a1cb2209efb,PodSandboxId:c4b5e60bc45c508e4c7c6619861b379ecfb53306ca91741be7bc4b1627668331,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf8
0f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1695141474495588280,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s7ncj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6515d5b-40f8-4027-afc0-1f13e4efeb11,},Annotations:map[string]string{io.kubernetes.container.hash: d9af3d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22040a3d53b536dbc444b11b700a40bf06541608da2570a4ed934880f0acc0a5,PodSandboxId:11e45f833b1f31293598ba7292668c4f6071a3c050c13f1cadddadf64f21ce9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695141456004191376,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac4da53-a9fc-42b1-8e4a-f0020a1acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: b92ed4c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be5d58f9e15674a5efae4e29a2dac121525f092c795b6a44ca80db8a6709adf,PodSandboxId:11e45f833b1f31293598ba7292668c4f6071a3c050c13f1cadddadf64f21ce9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695141424327902873,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac4da53-a9fc-42b1-8e4a-f0020a1acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: b92ed4c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e30fa2c0a41d85334abde832ab9a45c8682d0e1080cc89efe0421743f2e89d8,PodSandboxId:478906674d5438c4c15be162df7aaa86a58441a9b049587e93032c837350f35e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State
:CONTAINER_RUNNING,CreatedAt:1695141422192629312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zd4qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 426ae7e6-aea9-44f6-b00e-1d3a5e93e5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 812ef02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7abf9e9e4c8c56f30fa6f917aa04ca6973596ce647e958988e66277d4630a273,PodSandboxId:5bcdae46baa299359c4f732538e7181416adce62b94ed67f1bde61667b6461ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695
141412675214757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-flkc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad4ab33-2651-43cd-9d4f-185d140dec27,},Annotations:map[string]string{io.kubernetes.container.hash: 74d3fc2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0977df8ca96ce6b385ce2f8b552dc3ff6bf09a9073415dca148849f4322784a0,PodSandboxId:544b79188eb8d1b0af7c74d932b4e1e796386138852b6a9a957d898c83e76f1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e
6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695141387642443241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807d8666e3be11f3f5f5702f8ca8bba2,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbbdd1dfb9b0d10890df1eadbdf691a4dade526a2b8bd7dd981048d7c1329b7,PodSandboxId:515f750642b84241c9ea2bc5236e93a43e6bdc2f6df1e2e08fe220ed7b9b6935,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde
94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695141387401597229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba43d6c4ce89b42aa3dbd2fb38b3020,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ab4eb8ba0cd90742a9c350d4963030440cd7edcc8cf3b0374e22638117bc21,PodSandboxId:558681aa2031f1cd46f12585ad3ab53328865a62d985faba90b7703201c6f586,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2f
d2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695141387422793515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf8666c162eedd8ee6f0ad3b40dd753f,},Annotations:map[string]string{io.kubernetes.container.hash: 5da19477,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e21e23e1c11acf4d0ea8c57cea1bcef976233d3cad2f032d3ebf8c1cc3a1fb,PodSandboxId:85f2df22071cd738e3341b91c0e6d83612e896b4e756b235e0a19e52aece0126,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string
{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695141387238388014,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a67ccb0c23db0fb8482528a3f30ae2,},Annotations:map[string]string{io.kubernetes.container.hash: eec86e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1f29c7b7-8c72-4b7b-b019-4a449618d3a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.428629863Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3cd14e09-1bd5-422c-b388-2ab5fc0cac70 name=/runtime.v1.RuntimeService/Version
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.428685957Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3cd14e09-1bd5-422c-b388-2ab5fc0cac70 name=/runtime.v1.RuntimeService/Version
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.429802771Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2d56b3fe-60b8-48d6-a887-81947bbff21b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.431056083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695141673431037578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484777,},InodesUsed:&UInt64Value{Value:202,},},},}" file="go-grpc-middleware/chain.go:25" id=2d56b3fe-60b8-48d6-a887-81947bbff21b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.431866608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9aeb418-f1ee-42f6-a770-170791e9a203 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.431915273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9aeb418-f1ee-42f6-a770-170791e9a203 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.432224436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21af637b27f2444f65f96c1b4a1d2c3e3401c8fde5a7ab00fc0e9dcf571981c2,PodSandboxId:c125e1c24bbd30f0c0209db7f84c65f3e9150688c4f0b8bdbfd14f4d1b589f86,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1695141665906328194,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-jd6cs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3532a0d3-1e5b-40e7-b77b-828f3ba927c7,},Annotations:map[string]string{io.kubernetes.container.hash: 5552bb64,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baaea065a819e8c6f9f5255ea135aad314ac8c49ccc585b4402ffdbe1954936b,PodSandboxId:b9744714d7ee18e1dd5625c40a51b0392a8537e0086448587ab57205fbf902f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1695141526002219873,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ac2608-6ed2-49a6-97a1-9efaf2f4b32d,},Annotations:map[string]string{io.kubernet
es.container.hash: 96e1c392,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e558e16ec3ed0a5129b5db1dcaf399f3a5f682dbf5f2edd89cb75cd389b4d2,PodSandboxId:a641abdceb01065c37599f25a5ef5f894830853fd8a18417403f57a9946c29d5,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1695141518139448553,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-zm88j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 960afd60-f5d7-4309-8163-df6fb3d4fb88,},Annotations:map[string]string{io.kubernetes.container.hash: e0a17fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2100e21b58ac1f64330495e736b4292b0d3a3a107b17774056508246c462a2a4,PodSandboxId:b86b674149d966870499ce7d54072ae118d797bb31d3097378f303257a24e6fe,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1695141497700447218,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ing
ress-nginx-admission-patch-9cvjs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 48b949bf-cead-485f-b08f-9359f9a13a28,},Annotations:map[string]string{io.kubernetes.container.hash: 3c1418f0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:433f3b8e04289102708cae4fa4e1422c0dc9d8c43ba96a204bc8312e14f42212,PodSandboxId:8f7c52695c1f12310db8c5ba6ece0d380c5bebcf5a92df7764d80058d1f4f0f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1695141495930345166,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-xx2pf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 34f5fdfd-9dd9-4991-9b6d-c3bc2303f567,},Annotations:map[string]string{io.kubernetes.container.hash: 165d56ef,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920b249a69ea9bc2d241989861d47ef9234b0cc0f3cba5f6d30c6a1cb2209efb,PodSandboxId:c4b5e60bc45c508e4c7c6619861b379ecfb53306ca91741be7bc4b1627668331,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf8
0f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1695141474495588280,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s7ncj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6515d5b-40f8-4027-afc0-1f13e4efeb11,},Annotations:map[string]string{io.kubernetes.container.hash: d9af3d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22040a3d53b536dbc444b11b700a40bf06541608da2570a4ed934880f0acc0a5,PodSandboxId:11e45f833b1f31293598ba7292668c4f6071a3c050c13f1cadddadf64f21ce9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695141456004191376,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac4da53-a9fc-42b1-8e4a-f0020a1acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: b92ed4c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be5d58f9e15674a5efae4e29a2dac121525f092c795b6a44ca80db8a6709adf,PodSandboxId:11e45f833b1f31293598ba7292668c4f6071a3c050c13f1cadddadf64f21ce9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695141424327902873,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac4da53-a9fc-42b1-8e4a-f0020a1acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: b92ed4c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e30fa2c0a41d85334abde832ab9a45c8682d0e1080cc89efe0421743f2e89d8,PodSandboxId:478906674d5438c4c15be162df7aaa86a58441a9b049587e93032c837350f35e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State
:CONTAINER_RUNNING,CreatedAt:1695141422192629312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zd4qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 426ae7e6-aea9-44f6-b00e-1d3a5e93e5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 812ef02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7abf9e9e4c8c56f30fa6f917aa04ca6973596ce647e958988e66277d4630a273,PodSandboxId:5bcdae46baa299359c4f732538e7181416adce62b94ed67f1bde61667b6461ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695
141412675214757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-flkc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad4ab33-2651-43cd-9d4f-185d140dec27,},Annotations:map[string]string{io.kubernetes.container.hash: 74d3fc2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0977df8ca96ce6b385ce2f8b552dc3ff6bf09a9073415dca148849f4322784a0,PodSandboxId:544b79188eb8d1b0af7c74d932b4e1e796386138852b6a9a957d898c83e76f1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e
6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695141387642443241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807d8666e3be11f3f5f5702f8ca8bba2,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbbdd1dfb9b0d10890df1eadbdf691a4dade526a2b8bd7dd981048d7c1329b7,PodSandboxId:515f750642b84241c9ea2bc5236e93a43e6bdc2f6df1e2e08fe220ed7b9b6935,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde
94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695141387401597229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba43d6c4ce89b42aa3dbd2fb38b3020,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ab4eb8ba0cd90742a9c350d4963030440cd7edcc8cf3b0374e22638117bc21,PodSandboxId:558681aa2031f1cd46f12585ad3ab53328865a62d985faba90b7703201c6f586,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2f
d2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695141387422793515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf8666c162eedd8ee6f0ad3b40dd753f,},Annotations:map[string]string{io.kubernetes.container.hash: 5da19477,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e21e23e1c11acf4d0ea8c57cea1bcef976233d3cad2f032d3ebf8c1cc3a1fb,PodSandboxId:85f2df22071cd738e3341b91c0e6d83612e896b4e756b235e0a19e52aece0126,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string
{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695141387238388014,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a67ccb0c23db0fb8482528a3f30ae2,},Annotations:map[string]string{io.kubernetes.container.hash: eec86e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9aeb418-f1ee-42f6-a770-170791e9a203 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.470771347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e5d550d6-4e5d-411f-9b65-d289d104fd4c name=/runtime.v1.RuntimeService/Version
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.470822237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e5d550d6-4e5d-411f-9b65-d289d104fd4c name=/runtime.v1.RuntimeService/Version
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.471922712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=71750978-aa1d-4b82-94c7-cf42e0d3d714 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.473002449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695141673472985323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:484777,},InodesUsed:&UInt64Value{Value:202,},},},}" file="go-grpc-middleware/chain.go:25" id=71750978-aa1d-4b82-94c7-cf42e0d3d714 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.473713568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=08697bf0-b717-45e3-a7ae-3bc823fa9722 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.473763883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=08697bf0-b717-45e3-a7ae-3bc823fa9722 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:41:13 addons-897988 crio[718]: time="2023-09-19 16:41:13.474063898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21af637b27f2444f65f96c1b4a1d2c3e3401c8fde5a7ab00fc0e9dcf571981c2,PodSandboxId:c125e1c24bbd30f0c0209db7f84c65f3e9150688c4f0b8bdbfd14f4d1b589f86,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1695141665906328194,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-jd6cs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3532a0d3-1e5b-40e7-b77b-828f3ba927c7,},Annotations:map[string]string{io.kubernetes.container.hash: 5552bb64,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baaea065a819e8c6f9f5255ea135aad314ac8c49ccc585b4402ffdbe1954936b,PodSandboxId:b9744714d7ee18e1dd5625c40a51b0392a8537e0086448587ab57205fbf902f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1695141526002219873,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ac2608-6ed2-49a6-97a1-9efaf2f4b32d,},Annotations:map[string]string{io.kubernet
es.container.hash: 96e1c392,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e558e16ec3ed0a5129b5db1dcaf399f3a5f682dbf5f2edd89cb75cd389b4d2,PodSandboxId:a641abdceb01065c37599f25a5ef5f894830853fd8a18417403f57a9946c29d5,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1695141518139448553,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-zm88j,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 960afd60-f5d7-4309-8163-df6fb3d4fb88,},Annotations:map[string]string{io.kubernetes.container.hash: e0a17fc,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2100e21b58ac1f64330495e736b4292b0d3a3a107b17774056508246c462a2a4,PodSandboxId:b86b674149d966870499ce7d54072ae118d797bb31d3097378f303257a24e6fe,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1695141497700447218,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ing
ress-nginx-admission-patch-9cvjs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 48b949bf-cead-485f-b08f-9359f9a13a28,},Annotations:map[string]string{io.kubernetes.container.hash: 3c1418f0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:433f3b8e04289102708cae4fa4e1422c0dc9d8c43ba96a204bc8312e14f42212,PodSandboxId:8f7c52695c1f12310db8c5ba6ece0d380c5bebcf5a92df7764d80058d1f4f0f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1695141495930345166,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-xx2pf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 34f5fdfd-9dd9-4991-9b6d-c3bc2303f567,},Annotations:map[string]string{io.kubernetes.container.hash: 165d56ef,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:920b249a69ea9bc2d241989861d47ef9234b0cc0f3cba5f6d30c6a1cb2209efb,PodSandboxId:c4b5e60bc45c508e4c7c6619861b379ecfb53306ca91741be7bc4b1627668331,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf8
0f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1695141474495588280,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s7ncj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6515d5b-40f8-4027-afc0-1f13e4efeb11,},Annotations:map[string]string{io.kubernetes.container.hash: d9af3d2b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22040a3d53b536dbc444b11b700a40bf06541608da2570a4ed934880f0acc0a5,PodSandboxId:11e45f833b1f31293598ba7292668c4f6071a3c050c13f1cadddadf64f21ce9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695141456004191376,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac4da53-a9fc-42b1-8e4a-f0020a1acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: b92ed4c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7be5d58f9e15674a5efae4e29a2dac121525f092c795b6a44ca80db8a6709adf,PodSandboxId:11e45f833b1f31293598ba7292668c4f6071a3c050c13f1cadddadf64f21ce9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695141424327902873,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aac4da53-a9fc-42b1-8e4a-f0020a1acaf5,},Annotations:map[string]string{io.kubernetes.container.hash: b92ed4c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e30fa2c0a41d85334abde832ab9a45c8682d0e1080cc89efe0421743f2e89d8,PodSandboxId:478906674d5438c4c15be162df7aaa86a58441a9b049587e93032c837350f35e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State
:CONTAINER_RUNNING,CreatedAt:1695141422192629312,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zd4qq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 426ae7e6-aea9-44f6-b00e-1d3a5e93e5cc,},Annotations:map[string]string{io.kubernetes.container.hash: 812ef02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7abf9e9e4c8c56f30fa6f917aa04ca6973596ce647e958988e66277d4630a273,PodSandboxId:5bcdae46baa299359c4f732538e7181416adce62b94ed67f1bde61667b6461ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695
141412675214757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-flkc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad4ab33-2651-43cd-9d4f-185d140dec27,},Annotations:map[string]string{io.kubernetes.container.hash: 74d3fc2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0977df8ca96ce6b385ce2f8b552dc3ff6bf09a9073415dca148849f4322784a0,PodSandboxId:544b79188eb8d1b0af7c74d932b4e1e796386138852b6a9a957d898c83e76f1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e
6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695141387642443241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807d8666e3be11f3f5f5702f8ca8bba2,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdbbdd1dfb9b0d10890df1eadbdf691a4dade526a2b8bd7dd981048d7c1329b7,PodSandboxId:515f750642b84241c9ea2bc5236e93a43e6bdc2f6df1e2e08fe220ed7b9b6935,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde
94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695141387401597229,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ba43d6c4ce89b42aa3dbd2fb38b3020,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ab4eb8ba0cd90742a9c350d4963030440cd7edcc8cf3b0374e22638117bc21,PodSandboxId:558681aa2031f1cd46f12585ad3ab53328865a62d985faba90b7703201c6f586,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2f
d2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695141387422793515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf8666c162eedd8ee6f0ad3b40dd753f,},Annotations:map[string]string{io.kubernetes.container.hash: 5da19477,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e21e23e1c11acf4d0ea8c57cea1bcef976233d3cad2f032d3ebf8c1cc3a1fb,PodSandboxId:85f2df22071cd738e3341b91c0e6d83612e896b4e756b235e0a19e52aece0126,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string
{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695141387238388014,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-897988,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7a67ccb0c23db0fb8482528a3f30ae2,},Annotations:map[string]string{io.kubernetes.container.hash: eec86e04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=08697bf0-b717-45e3-a7ae-3bc823fa9722 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	21af637b27f24       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb                      7 seconds ago       Running             hello-world-app           0                   c125e1c24bbd3       hello-world-app-5d77478584-jd6cs
	baaea065a819e       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   b9744714d7ee1       nginx
	a2e558e16ec3e       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                        2 minutes ago       Running             headlamp                  0                   a641abdceb010       headlamp-699c48fb74-zm88j
	2100e21b58ac1       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                             2 minutes ago       Exited              patch                     3                   b86b674149d96       ingress-nginx-admission-patch-9cvjs
	433f3b8e04289       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   8f7c52695c1f1       gcp-auth-d4c87556c-xx2pf
	920b249a69ea9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   c4b5e60bc45c5       ingress-nginx-admission-create-s7ncj
	22040a3d53b53       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       1                   11e45f833b1f3       storage-provisioner
	7be5d58f9e156       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   11e45f833b1f3       storage-provisioner
	7e30fa2c0a41d       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                                             4 minutes ago       Running             kube-proxy                0                   478906674d543       kube-proxy-zd4qq
	7abf9e9e4c8c5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   5bcdae46baa29       coredns-5dd5756b68-flkc5
	0977df8ca96ce       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                                             4 minutes ago       Running             kube-scheduler            0                   544b79188eb8d       kube-scheduler-addons-897988
	f7ab4eb8ba0cd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   558681aa2031f       etcd-addons-897988
	cdbbdd1dfb9b0       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                                             4 minutes ago       Running             kube-controller-manager   0                   515f750642b84       kube-controller-manager-addons-897988
	50e21e23e1c11       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                                             4 minutes ago       Running             kube-apiserver            0                   85f2df22071cd       kube-apiserver-addons-897988
	
	* 
	* ==> coredns [7abf9e9e4c8c56f30fa6f917aa04ca6973596ce647e958988e66277d4630a273] <==
	* [INFO] 10.244.0.8:43041 - 21255 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001402243s
	[INFO] 10.244.0.8:34570 - 36671 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000129141s
	[INFO] 10.244.0.8:34570 - 63804 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000149535s
	[INFO] 10.244.0.8:59130 - 25535 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068566s
	[INFO] 10.244.0.8:59130 - 8377 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060903s
	[INFO] 10.244.0.8:38488 - 13537 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084581s
	[INFO] 10.244.0.8:38488 - 37862 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000056595s
	[INFO] 10.244.0.8:39087 - 15623 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000139118s
	[INFO] 10.244.0.8:39087 - 50458 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000044321s
	[INFO] 10.244.0.8:58490 - 38073 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081017s
	[INFO] 10.244.0.8:58490 - 29367 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000023724s
	[INFO] 10.244.0.8:57307 - 25792 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000024109s
	[INFO] 10.244.0.8:57307 - 20674 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000021299s
	[INFO] 10.244.0.8:46882 - 53001 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000031177s
	[INFO] 10.244.0.8:46882 - 25099 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000017676s
	[INFO] 10.244.0.19:43871 - 50557 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000557962s
	[INFO] 10.244.0.19:45335 - 56659 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000079197s
	[INFO] 10.244.0.19:36440 - 39091 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000192167s
	[INFO] 10.244.0.19:41974 - 57668 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000581948s
	[INFO] 10.244.0.19:35645 - 33868 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165022s
	[INFO] 10.244.0.19:53397 - 63936 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00006405s
	[INFO] 10.244.0.19:42530 - 18954 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000565837s
	[INFO] 10.244.0.19:36366 - 63164 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000942558s
	[INFO] 10.244.0.22:40244 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000334259s
	[INFO] 10.244.0.22:43225 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000171822s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-897988
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-897988
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=addons-897988
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T16_36_34_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-897988
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:36:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-897988
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 16:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 16:41:09 +0000   Tue, 19 Sep 2023 16:36:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 16:41:09 +0000   Tue, 19 Sep 2023 16:36:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 16:41:09 +0000   Tue, 19 Sep 2023 16:36:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 16:41:09 +0000   Tue, 19 Sep 2023 16:36:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    addons-897988
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 b089c61e2f794bb5b981fac041dfb7d0
	  System UUID:                b089c61e-2f79-4bb5-b981-fac041dfb7d0
	  Boot ID:                    fb840833-33d1-440f-a3a6-972d2f727412
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-jd6cs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-xx2pf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  headlamp                    headlamp-699c48fb74-zm88j                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-5dd5756b68-flkc5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m26s
	  kube-system                 etcd-addons-897988                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m39s
	  kube-system                 kube-apiserver-addons-897988             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-controller-manager-addons-897988    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-zd4qq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-addons-897988             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node addons-897988 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node addons-897988 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node addons-897988 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m39s                  kubelet          Node addons-897988 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s                  kubelet          Node addons-897988 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s                  kubelet          Node addons-897988 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m38s                  kubelet          Node addons-897988 status is now: NodeReady
	  Normal  RegisteredNode           4m26s                  node-controller  Node addons-897988 event: Registered Node addons-897988 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.103503] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.417836] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep19 16:36] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150949] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.059829] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.362516] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.109021] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.144024] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.113968] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.217540] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +10.808692] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +8.750240] systemd-fstab-generator[1251]: Ignoring "noauto" for root device
	[ +24.658259] kauditd_printk_skb: 54 callbacks suppressed
	[Sep19 16:37] kauditd_printk_skb: 4 callbacks suppressed
	[ +20.391890] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.251152] kauditd_printk_skb: 16 callbacks suppressed
	[Sep19 16:38] kauditd_printk_skb: 3 callbacks suppressed
	[ +19.950606] kauditd_printk_skb: 20 callbacks suppressed
	[ +11.680325] kauditd_printk_skb: 25 callbacks suppressed
	[Sep19 16:39] kauditd_printk_skb: 15 callbacks suppressed
	[ +35.565968] kauditd_printk_skb: 12 callbacks suppressed
	[Sep19 16:41] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [f7ab4eb8ba0cd90742a9c350d4963030440cd7edcc8cf3b0374e22638117bc21] <==
	* {"level":"warn","ts":"2023-09-19T16:38:07.325924Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.616385ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78529"}
	{"level":"info","ts":"2023-09-19T16:38:07.325948Z","caller":"traceutil/trace.go:171","msg":"trace[497294982] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1054; }","duration":"131.65451ms","start":"2023-09-19T16:38:07.194287Z","end":"2023-09-19T16:38:07.325942Z","steps":["trace[497294982] 'agreement among raft nodes before linearized reading'  (duration: 131.487979ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T16:38:07.326133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.902236ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11230"}
	{"level":"info","ts":"2023-09-19T16:38:07.326193Z","caller":"traceutil/trace.go:171","msg":"trace[1683925466] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1054; }","duration":"128.961891ms","start":"2023-09-19T16:38:07.197223Z","end":"2023-09-19T16:38:07.326185Z","steps":["trace[1683925466] 'agreement among raft nodes before linearized reading'  (duration: 128.877296ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T16:38:10.10535Z","caller":"traceutil/trace.go:171","msg":"trace[1659984533] linearizableReadLoop","detail":"{readStateIndex:1090; appliedIndex:1089; }","duration":"111.918955ms","start":"2023-09-19T16:38:09.993343Z","end":"2023-09-19T16:38:10.105262Z","steps":["trace[1659984533] 'read index received'  (duration: 111.4557ms)","trace[1659984533] 'applied index is now lower than readState.Index'  (duration: 462.911µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-19T16:38:10.105987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.642891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14197"}
	{"level":"info","ts":"2023-09-19T16:38:10.106211Z","caller":"traceutil/trace.go:171","msg":"trace[330435480] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"174.965208ms","start":"2023-09-19T16:38:09.931231Z","end":"2023-09-19T16:38:10.106196Z","steps":["trace[330435480] 'process raft request'  (duration: 173.672125ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T16:38:10.106243Z","caller":"traceutil/trace.go:171","msg":"trace[1304360184] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1057; }","duration":"112.819314ms","start":"2023-09-19T16:38:09.993318Z","end":"2023-09-19T16:38:10.106138Z","steps":["trace[1304360184] 'agreement among raft nodes before linearized reading'  (duration: 112.298256ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T16:38:15.119761Z","caller":"traceutil/trace.go:171","msg":"trace[342199485] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"138.515645ms","start":"2023-09-19T16:38:14.981207Z","end":"2023-09-19T16:38:15.119722Z","steps":["trace[342199485] 'process raft request'  (duration: 137.8655ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T16:38:37.970291Z","caller":"traceutil/trace.go:171","msg":"trace[1035552122] transaction","detail":"{read_only:false; response_revision:1289; number_of_response:1; }","duration":"472.621578ms","start":"2023-09-19T16:38:37.497656Z","end":"2023-09-19T16:38:37.970277Z","steps":["trace[1035552122] 'process raft request'  (duration: 472.527441ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T16:38:37.970574Z","caller":"traceutil/trace.go:171","msg":"trace[174802119] linearizableReadLoop","detail":"{readStateIndex:1330; appliedIndex:1330; }","duration":"469.496087ms","start":"2023-09-19T16:38:37.501065Z","end":"2023-09-19T16:38:37.970561Z","steps":["trace[174802119] 'read index received'  (duration: 469.488998ms)","trace[174802119] 'applied index is now lower than readState.Index'  (duration: 5.528µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-19T16:38:37.97073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"469.721985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2023-09-19T16:38:37.970782Z","caller":"traceutil/trace.go:171","msg":"trace[1693091700] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1289; }","duration":"469.784448ms","start":"2023-09-19T16:38:37.50099Z","end":"2023-09-19T16:38:37.970775Z","steps":["trace[1693091700] 'agreement among raft nodes before linearized reading'  (duration: 469.666311ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T16:38:37.970811Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T16:38:37.500981Z","time spent":"469.823889ms","remote":"127.0.0.1:57726","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":577,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"warn","ts":"2023-09-19T16:38:37.970737Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T16:38:37.497642Z","time spent":"472.815846ms","remote":"127.0.0.1:57698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1278 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-09-19T16:38:37.983607Z","caller":"traceutil/trace.go:171","msg":"trace[462133363] transaction","detail":"{read_only:false; response_revision:1290; number_of_response:1; }","duration":"268.510597ms","start":"2023-09-19T16:38:37.715083Z","end":"2023-09-19T16:38:37.983593Z","steps":["trace[462133363] 'process raft request'  (duration: 259.056066ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T16:38:37.985001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"437.064595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5732"}
	{"level":"info","ts":"2023-09-19T16:38:37.985247Z","caller":"traceutil/trace.go:171","msg":"trace[413179479] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1290; }","duration":"437.407489ms","start":"2023-09-19T16:38:37.547831Z","end":"2023-09-19T16:38:37.985239Z","steps":["trace[413179479] 'agreement among raft nodes before linearized reading'  (duration: 437.032028ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T16:38:37.985962Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T16:38:37.547818Z","time spent":"438.130331ms","remote":"127.0.0.1:57702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":5755,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2023-09-19T16:38:37.988695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.61795ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-19T16:38:37.98943Z","caller":"traceutil/trace.go:171","msg":"trace[2096878213] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:1291; }","duration":"376.282838ms","start":"2023-09-19T16:38:37.613062Z","end":"2023-09-19T16:38:37.989344Z","steps":["trace[2096878213] 'agreement among raft nodes before linearized reading'  (duration: 373.052854ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T16:38:37.989568Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T16:38:37.613045Z","time spent":"376.511701ms","remote":"127.0.0.1:57738","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":28,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true "}
	{"level":"info","ts":"2023-09-19T16:39:08.387648Z","caller":"traceutil/trace.go:171","msg":"trace[2033113171] transaction","detail":"{read_only:false; response_revision:1448; number_of_response:1; }","duration":"140.320929ms","start":"2023-09-19T16:39:08.247295Z","end":"2023-09-19T16:39:08.387616Z","steps":["trace[2033113171] 'process raft request'  (duration: 140.078473ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T16:39:19.444011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.657902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5638"}
	{"level":"info","ts":"2023-09-19T16:39:19.444816Z","caller":"traceutil/trace.go:171","msg":"trace[263173049] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1477; }","duration":"158.649143ms","start":"2023-09-19T16:39:19.286152Z","end":"2023-09-19T16:39:19.444801Z","steps":["trace[263173049] 'range keys from in-memory index tree'  (duration: 157.533892ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [433f3b8e04289102708cae4fa4e1422c0dc9d8c43ba96a204bc8312e14f42212] <==
	* 2023/09/19 16:38:16 GCP Auth Webhook started!
	2023/09/19 16:38:26 http: TLS handshake error from 10.244.0.1:50640: remote error: tls: bad certificate
	2023/09/19 16:38:29 Ready to marshal response ...
	2023/09/19 16:38:29 Ready to write response ...
	2023/09/19 16:38:29 Ready to marshal response ...
	2023/09/19 16:38:29 Ready to write response ...
	2023/09/19 16:38:29 Ready to marshal response ...
	2023/09/19 16:38:29 Ready to write response ...
	2023/09/19 16:38:31 Ready to marshal response ...
	2023/09/19 16:38:31 Ready to write response ...
	2023/09/19 16:38:35 Ready to marshal response ...
	2023/09/19 16:38:35 Ready to write response ...
	2023/09/19 16:38:39 Ready to marshal response ...
	2023/09/19 16:38:39 Ready to write response ...
	2023/09/19 16:38:42 Ready to marshal response ...
	2023/09/19 16:38:42 Ready to write response ...
	2023/09/19 16:39:11 Ready to marshal response ...
	2023/09/19 16:39:11 Ready to write response ...
	2023/09/19 16:39:28 Ready to marshal response ...
	2023/09/19 16:39:28 Ready to write response ...
	2023/09/19 16:41:02 Ready to marshal response ...
	2023/09/19 16:41:02 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  16:41:13 up 5 min,  0 users,  load average: 1.13, 1.90, 0.98
	Linux addons-897988 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [50e21e23e1c11acf4d0ea8c57cea1bcef976233d3cad2f032d3ebf8c1cc3a1fb] <==
	* I0919 16:38:29.311339       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.216.115"}
	I0919 16:38:35.523589       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0919 16:38:39.325802       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 16:38:39.634064       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.208.5"}
	I0919 16:39:25.411695       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0919 16:39:46.829743       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 16:39:46.829855       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 16:39:46.845301       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 16:39:46.845392       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 16:39:46.855376       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 16:39:46.855450       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 16:39:46.867287       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 16:39:46.867346       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 16:39:46.884923       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 16:39:46.885012       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 16:39:46.909319       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 16:39:46.909405       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 16:39:46.915941       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 16:39:46.916001       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 16:39:46.939704       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 16:39:46.939803       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 16:39:47.869191       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 16:39:47.940554       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0919 16:39:47.940586       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0919 16:41:02.623717       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.47.27"}
	
	* 
	* ==> kube-controller-manager [cdbbdd1dfb9b0d10890df1eadbdf691a4dade526a2b8bd7dd981048d7c1329b7] <==
	* W0919 16:40:09.641949       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 16:40:09.642007       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0919 16:40:22.389276       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 16:40:22.389333       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0919 16:40:29.082138       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 16:40:29.082264       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0919 16:40:30.853436       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 16:40:30.853613       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0919 16:40:39.235566       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 16:40:39.235681       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0919 16:40:55.583733       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 16:40:55.583831       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0919 16:40:55.772965       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 16:40:55.773059       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0919 16:41:02.383890       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0919 16:41:02.422445       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-jd6cs"
	I0919 16:41:02.444268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.322631ms"
	I0919 16:41:02.458788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.37528ms"
	I0919 16:41:02.459419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="151.514µs"
	I0919 16:41:02.469595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="120.861µs"
	I0919 16:41:05.568659       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0919 16:41:05.588959       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0919 16:41:05.597039       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="6.228µs"
	I0919 16:41:06.796258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.535512ms"
	I0919 16:41:06.797216       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.97µs"
	
	* 
	* ==> kube-proxy [7e30fa2c0a41d85334abde832ab9a45c8682d0e1080cc89efe0421743f2e89d8] <==
	* I0919 16:37:04.395248       1 server_others.go:69] "Using iptables proxy"
	I0919 16:37:04.413823       1 node.go:141] Successfully retrieved node IP: 192.168.39.206
	I0919 16:37:04.630690       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 16:37:04.630794       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 16:37:04.736752       1 server_others.go:152] "Using iptables Proxier"
	I0919 16:37:04.736864       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 16:37:04.740299       1 server.go:846] "Version info" version="v1.28.2"
	I0919 16:37:04.740376       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 16:37:04.760254       1 config.go:188] "Starting service config controller"
	I0919 16:37:04.760322       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 16:37:04.760348       1 config.go:97] "Starting endpoint slice config controller"
	I0919 16:37:04.760352       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 16:37:04.773439       1 config.go:315] "Starting node config controller"
	I0919 16:37:04.773570       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 16:37:04.887118       1 shared_informer.go:318] Caches are synced for node config
	I0919 16:37:04.947839       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 16:37:04.954039       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [0977df8ca96ce6b385ce2f8b552dc3ff6bf09a9073415dca148849f4322784a0] <==
	* W0919 16:36:31.341027       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 16:36:31.341075       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0919 16:36:32.159472       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 16:36:32.159581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 16:36:32.182343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 16:36:32.182462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 16:36:32.267446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 16:36:32.267627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 16:36:32.290114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 16:36:32.290192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0919 16:36:32.315802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 16:36:32.315858       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 16:36:32.329710       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 16:36:32.329763       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 16:36:32.386002       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 16:36:32.386053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0919 16:36:32.391461       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 16:36:32.391555       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 16:36:32.399570       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 16:36:32.399619       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0919 16:36:32.463255       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 16:36:32.463324       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 16:36:32.540038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 16:36:32.540124       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0919 16:36:34.823568       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 16:36:02 UTC, ends at Tue 2023-09-19 16:41:14 UTC. --
	Sep 19 16:41:02 addons-897988 kubelet[1258]: I0919 16:41:02.438781    1258 memory_manager.go:346] "RemoveStaleState removing state" podUID="0caf6205-0b6f-454c-ab22-f2dc9e577eda" containerName="csi-resizer"
	Sep 19 16:41:02 addons-897988 kubelet[1258]: I0919 16:41:02.438792    1258 memory_manager.go:346] "RemoveStaleState removing state" podUID="f5c70436-bfae-4091-910d-7f1ba70a103d" containerName="volume-snapshot-controller"
	Sep 19 16:41:02 addons-897988 kubelet[1258]: I0919 16:41:02.438798    1258 memory_manager.go:346] "RemoveStaleState removing state" podUID="03ebb44b-1214-4a03-84ab-9e6197ee8f9e" containerName="csi-provisioner"
	Sep 19 16:41:02 addons-897988 kubelet[1258]: I0919 16:41:02.438804    1258 memory_manager.go:346] "RemoveStaleState removing state" podUID="03ebb44b-1214-4a03-84ab-9e6197ee8f9e" containerName="liveness-probe"
	Sep 19 16:41:02 addons-897988 kubelet[1258]: I0919 16:41:02.452732    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3532a0d3-1e5b-40e7-b77b-828f3ba927c7-gcp-creds\") pod \"hello-world-app-5d77478584-jd6cs\" (UID: \"3532a0d3-1e5b-40e7-b77b-828f3ba927c7\") " pod="default/hello-world-app-5d77478584-jd6cs"
	Sep 19 16:41:02 addons-897988 kubelet[1258]: I0919 16:41:02.452774    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbmx\" (UniqueName: \"kubernetes.io/projected/3532a0d3-1e5b-40e7-b77b-828f3ba927c7-kube-api-access-8gbmx\") pod \"hello-world-app-5d77478584-jd6cs\" (UID: \"3532a0d3-1e5b-40e7-b77b-828f3ba927c7\") " pod="default/hello-world-app-5d77478584-jd6cs"
	Sep 19 16:41:03 addons-897988 kubelet[1258]: I0919 16:41:03.863696    1258 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qnws\" (UniqueName: \"kubernetes.io/projected/32e13698-283b-49d8-a578-e41da4746dd0-kube-api-access-8qnws\") pod \"32e13698-283b-49d8-a578-e41da4746dd0\" (UID: \"32e13698-283b-49d8-a578-e41da4746dd0\") "
	Sep 19 16:41:03 addons-897988 kubelet[1258]: I0919 16:41:03.870324    1258 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e13698-283b-49d8-a578-e41da4746dd0-kube-api-access-8qnws" (OuterVolumeSpecName: "kube-api-access-8qnws") pod "32e13698-283b-49d8-a578-e41da4746dd0" (UID: "32e13698-283b-49d8-a578-e41da4746dd0"). InnerVolumeSpecName "kube-api-access-8qnws". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 16:41:03 addons-897988 kubelet[1258]: I0919 16:41:03.964305    1258 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8qnws\" (UniqueName: \"kubernetes.io/projected/32e13698-283b-49d8-a578-e41da4746dd0-kube-api-access-8qnws\") on node \"addons-897988\" DevicePath \"\""
	Sep 19 16:41:04 addons-897988 kubelet[1258]: I0919 16:41:04.753821    1258 scope.go:117] "RemoveContainer" containerID="e3c9a5507e6420bc113548fccdee0b86f3ca6e77bae05022deb56722a46c71b1"
	Sep 19 16:41:06 addons-897988 kubelet[1258]: I0919 16:41:06.653719    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="32e13698-283b-49d8-a578-e41da4746dd0" path="/var/lib/kubelet/pods/32e13698-283b-49d8-a578-e41da4746dd0/volumes"
	Sep 19 16:41:06 addons-897988 kubelet[1258]: I0919 16:41:06.654283    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="48b949bf-cead-485f-b08f-9359f9a13a28" path="/var/lib/kubelet/pods/48b949bf-cead-485f-b08f-9359f9a13a28/volumes"
	Sep 19 16:41:06 addons-897988 kubelet[1258]: I0919 16:41:06.654924    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f6515d5b-40f8-4027-afc0-1f13e4efeb11" path="/var/lib/kubelet/pods/f6515d5b-40f8-4027-afc0-1f13e4efeb11/volumes"
	Sep 19 16:41:06 addons-897988 kubelet[1258]: I0919 16:41:06.781682    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-jd6cs" podStartSLOduration=2.432358866 podCreationTimestamp="2023-09-19 16:41:02 +0000 UTC" firstStartedPulling="2023-09-19 16:41:03.534474857 +0000 UTC m=+269.064920469" lastFinishedPulling="2023-09-19 16:41:05.883730025 +0000 UTC m=+271.414175636" observedRunningTime="2023-09-19 16:41:06.780881762 +0000 UTC m=+272.311327394" watchObservedRunningTime="2023-09-19 16:41:06.781614033 +0000 UTC m=+272.312059663"
	Sep 19 16:41:09 addons-897988 kubelet[1258]: I0919 16:41:09.202154    1258 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zr2l\" (UniqueName: \"kubernetes.io/projected/91cb1829-40fd-47b6-963f-3ce4855bc6e5-kube-api-access-4zr2l\") pod \"91cb1829-40fd-47b6-963f-3ce4855bc6e5\" (UID: \"91cb1829-40fd-47b6-963f-3ce4855bc6e5\") "
	Sep 19 16:41:09 addons-897988 kubelet[1258]: I0919 16:41:09.202237    1258 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91cb1829-40fd-47b6-963f-3ce4855bc6e5-webhook-cert\") pod \"91cb1829-40fd-47b6-963f-3ce4855bc6e5\" (UID: \"91cb1829-40fd-47b6-963f-3ce4855bc6e5\") "
	Sep 19 16:41:09 addons-897988 kubelet[1258]: I0919 16:41:09.207447    1258 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91cb1829-40fd-47b6-963f-3ce4855bc6e5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "91cb1829-40fd-47b6-963f-3ce4855bc6e5" (UID: "91cb1829-40fd-47b6-963f-3ce4855bc6e5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 19 16:41:09 addons-897988 kubelet[1258]: I0919 16:41:09.207674    1258 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91cb1829-40fd-47b6-963f-3ce4855bc6e5-kube-api-access-4zr2l" (OuterVolumeSpecName: "kube-api-access-4zr2l") pod "91cb1829-40fd-47b6-963f-3ce4855bc6e5" (UID: "91cb1829-40fd-47b6-963f-3ce4855bc6e5"). InnerVolumeSpecName "kube-api-access-4zr2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 16:41:09 addons-897988 kubelet[1258]: I0919 16:41:09.303304    1258 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91cb1829-40fd-47b6-963f-3ce4855bc6e5-webhook-cert\") on node \"addons-897988\" DevicePath \"\""
	Sep 19 16:41:09 addons-897988 kubelet[1258]: I0919 16:41:09.303370    1258 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4zr2l\" (UniqueName: \"kubernetes.io/projected/91cb1829-40fd-47b6-963f-3ce4855bc6e5-kube-api-access-4zr2l\") on node \"addons-897988\" DevicePath \"\""
	Sep 19 16:41:09 addons-897988 kubelet[1258]: I0919 16:41:09.781895    1258 scope.go:117] "RemoveContainer" containerID="1ec645437cca5c92c9c36847056f4473a99b955ece676da5a8e34305629b7043"
	Sep 19 16:41:09 addons-897988 kubelet[1258]: I0919 16:41:09.812190    1258 scope.go:117] "RemoveContainer" containerID="1ec645437cca5c92c9c36847056f4473a99b955ece676da5a8e34305629b7043"
	Sep 19 16:41:09 addons-897988 kubelet[1258]: E0919 16:41:09.812803    1258 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec645437cca5c92c9c36847056f4473a99b955ece676da5a8e34305629b7043\": container with ID starting with 1ec645437cca5c92c9c36847056f4473a99b955ece676da5a8e34305629b7043 not found: ID does not exist" containerID="1ec645437cca5c92c9c36847056f4473a99b955ece676da5a8e34305629b7043"
	Sep 19 16:41:09 addons-897988 kubelet[1258]: I0919 16:41:09.812878    1258 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec645437cca5c92c9c36847056f4473a99b955ece676da5a8e34305629b7043"} err="failed to get container status \"1ec645437cca5c92c9c36847056f4473a99b955ece676da5a8e34305629b7043\": rpc error: code = NotFound desc = could not find container \"1ec645437cca5c92c9c36847056f4473a99b955ece676da5a8e34305629b7043\": container with ID starting with 1ec645437cca5c92c9c36847056f4473a99b955ece676da5a8e34305629b7043 not found: ID does not exist"
	Sep 19 16:41:10 addons-897988 kubelet[1258]: I0919 16:41:10.654424    1258 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="91cb1829-40fd-47b6-963f-3ce4855bc6e5" path="/var/lib/kubelet/pods/91cb1829-40fd-47b6-963f-3ce4855bc6e5/volumes"
	
	* 
	* ==> storage-provisioner [22040a3d53b536dbc444b11b700a40bf06541608da2570a4ed934880f0acc0a5] <==
	* I0919 16:37:36.262652       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 16:37:36.275703       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 16:37:36.275884       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 16:37:36.290081       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 16:37:36.291728       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-897988_d2784e64-9bba-4f95-8cff-9b507626f60e!
	I0919 16:37:36.297021       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eec3327e-5d21-4365-82aa-138f704481b7", APIVersion:"v1", ResourceVersion:"899", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-897988_d2784e64-9bba-4f95-8cff-9b507626f60e became leader
	I0919 16:37:36.393229       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-897988_d2784e64-9bba-4f95-8cff-9b507626f60e!
	
	* 
	* ==> storage-provisioner [7be5d58f9e15674a5efae4e29a2dac121525f092c795b6a44ca80db8a6709adf] <==
	* I0919 16:37:05.070034       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 16:37:35.073845       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-897988 -n addons-897988
helpers_test.go:261: (dbg) Run:  kubectl --context addons-897988 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.78s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.68s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-897988
addons_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-897988: exit status 82 (2m1.728805894s)

                                                
                                                
-- stdout --
	* Stopping node "addons-897988"  ...
	* Stopping node "addons-897988"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:150: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-897988" : exit status 82
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-897988
addons_test.go:152: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-897988: exit status 11 (21.666139422s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.206:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-897988" : exit status 11
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-897988
addons_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-897988: exit status 11 (6.143201911s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.206:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-897988" : exit status 11
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-897988
addons_test.go:161: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-897988: exit status 11 (6.144210573s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.206:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-897988" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.68s)
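Note: the stop itself times out (exit status 82) while the VM still reports state "Running", and every follow-up addons command then fails at the SSH layer with "dial tcp 192.168.39.206:22: connect: no route to host". Below is a minimal Go sketch of that connectivity probe, using the node address from the stderr output above; it is a diagnostic illustration only, not minikube's own code.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Node SSH endpoint taken from the failure output above; adjust per profile.
		addr := "192.168.39.206:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// With the VM wedged mid-stop, this would be expected to report the same
			// "no route to host" error that the addons enable/disable commands surface.
			fmt.Printf("ssh port unreachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}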

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.794527629s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 image ls: (2.385197251s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-225429" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.18s)
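Note: the assertion at functional_test.go:442 is that the tag "gcr.io/google-containers/addon-resizer:functional-225429" appears in the "image ls" output after the tarball has been loaded. A rough standalone equivalent of that check, assuming the same binary path and profile name shown in the log (a sketch, not the actual test helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "gcr.io/google-containers/addon-resizer:functional-225429"
		// Same binary and profile as in the test run above.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-225429", "image", "ls").CombinedOutput()
		if err != nil {
			fmt.Printf("image ls failed: %v\n%s", err, out)
			return
		}
		if !strings.Contains(string(out), want) {
			fmt.Printf("expected %q to be loaded into minikube but the image is not there\n", want)
			return
		}
		fmt.Println("image present")
	}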

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (169.75s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-845293 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-845293 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.606704757s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-845293 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-845293 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 13.01986828s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0919 16:52:56.283004   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:52:56.288293   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:52:56.298576   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:52:56.318838   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:52:56.359189   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:52:56.439523   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:52:56.600018   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:52:56.920611   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:52:57.560865   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:52:58.841384   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:53:01.402282   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:53:06.522594   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:53:16.763397   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:53:21.266332   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:53:37.243992   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.540497921s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
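Note: curl exit code 28 (surfaced above as "ssh: Process exited with status 28") means the request timed out rather than being refused. The check itself is a plain HTTP GET against the node's loopback address with the Host header overridden so the ingress controller routes it to the nginx service; a minimal Go sketch of the same probe, run from inside the node (the target URL and the 10-second timeout are assumptions for illustration):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Overriding req.Host is the Go equivalent of curl -H 'Host: nginx.example.com';
		// the ingress controller uses this header to pick the nginx backend.
		req.Host = "nginx.example.com"
		resp, err := client.Do(req)
		if err != nil {
			fmt.Printf("request failed: %v\n", err) // a timeout here mirrors the test failure
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %d, %d bytes\n", resp.StatusCode, len(body))
	}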
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-845293 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
E0919 16:53:48.947581   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.244
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845293 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-845293 addons disable ingress-dns --alsologtostderr -v=1: (5.313852357s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845293 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-845293 addons disable ingress --alsologtostderr -v=1: (7.524480798s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-845293 -n ingress-addon-legacy-845293
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-845293 logs -n 25: (1.146905775s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-225429 image ls                                                | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	| image          | functional-225429 image load                                              | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| service        | functional-225429 service                                                 | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | hello-node-connect --url                                                  |                             |         |         |                     |                     |
	| addons         | functional-225429 addons list                                             | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	| addons         | functional-225429 addons list                                             | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | -o json                                                                   |                             |         |         |                     |                     |
	| update-context | functional-225429                                                         | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-225429                                                         | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-225429 image ls                                                | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	| update-context | functional-225429                                                         | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-225429 image save --daemon                                     | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-225429                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-225429                                                         | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-225429                                                         | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-225429 ssh pgrep                                               | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-225429                                                         | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-225429 image build -t                                          | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | localhost/my-image:functional-225429                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-225429                                                         | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-225429 image ls                                                | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	| delete         | -p functional-225429                                                      | functional-225429           | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:48 UTC |
	| start          | -p ingress-addon-legacy-845293                                            | ingress-addon-legacy-845293 | jenkins | v1.31.2 | 19 Sep 23 16:48 UTC | 19 Sep 23 16:50 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-845293                                               | ingress-addon-legacy-845293 | jenkins | v1.31.2 | 19 Sep 23 16:50 UTC | 19 Sep 23 16:51 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-845293                                               | ingress-addon-legacy-845293 | jenkins | v1.31.2 | 19 Sep 23 16:51 UTC | 19 Sep 23 16:51 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-845293                                               | ingress-addon-legacy-845293 | jenkins | v1.31.2 | 19 Sep 23 16:51 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-845293 ip                                            | ingress-addon-legacy-845293 | jenkins | v1.31.2 | 19 Sep 23 16:53 UTC | 19 Sep 23 16:53 UTC |
	| addons         | ingress-addon-legacy-845293                                               | ingress-addon-legacy-845293 | jenkins | v1.31.2 | 19 Sep 23 16:53 UTC | 19 Sep 23 16:53 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-845293                                               | ingress-addon-legacy-845293 | jenkins | v1.31.2 | 19 Sep 23 16:53 UTC | 19 Sep 23 16:54 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 16:48:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 16:48:56.960126   21496 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:48:56.960361   21496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:48:56.960371   21496 out.go:309] Setting ErrFile to fd 2...
	I0919 16:48:56.960376   21496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:48:56.960626   21496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 16:48:56.961218   21496 out.go:303] Setting JSON to false
	I0919 16:48:56.962045   21496 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1887,"bootTime":1695140250,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:48:56.962100   21496 start.go:138] virtualization: kvm guest
	I0919 16:48:56.964128   21496 out.go:177] * [ingress-addon-legacy-845293] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:48:56.965496   21496 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 16:48:56.965497   21496 notify.go:220] Checking for updates...
	I0919 16:48:56.966862   21496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:48:56.968230   21496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:48:56.969495   21496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:48:56.970802   21496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 16:48:56.972063   21496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 16:48:56.973480   21496 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:48:57.007192   21496 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 16:48:57.008540   21496 start.go:298] selected driver: kvm2
	I0919 16:48:57.008553   21496 start.go:902] validating driver "kvm2" against <nil>
	I0919 16:48:57.008564   21496 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 16:48:57.009266   21496 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:48:57.009354   21496 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 16:48:57.023137   21496 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 16:48:57.023191   21496 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 16:48:57.023383   21496 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 16:48:57.023417   21496 cni.go:84] Creating CNI manager for ""
	I0919 16:48:57.023429   21496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 16:48:57.023442   21496 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 16:48:57.023455   21496 start_flags.go:321] config:
	{Name:ingress-addon-legacy-845293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-845293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:48:57.023606   21496 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:48:57.026159   21496 out.go:177] * Starting control plane node ingress-addon-legacy-845293 in cluster ingress-addon-legacy-845293
	I0919 16:48:57.027327   21496 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0919 16:48:57.136530   21496 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0919 16:48:57.136565   21496 cache.go:57] Caching tarball of preloaded images
	I0919 16:48:57.136732   21496 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0919 16:48:57.138722   21496 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0919 16:48:57.140299   21496 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0919 16:48:57.255011   21496 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0919 16:49:16.533500   21496 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0919 16:49:16.533593   21496 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0919 16:49:17.515774   21496 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0919 16:49:17.516108   21496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/config.json ...
	I0919 16:49:17.516139   21496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/config.json: {Name:mk58f6c38af9cf8a22a7ba0b7eddafaf10c47ff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:49:17.516289   21496 start.go:365] acquiring machines lock for ingress-addon-legacy-845293: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 16:49:17.516319   21496 start.go:369] acquired machines lock for "ingress-addon-legacy-845293" in 15.306µs
	I0919 16:49:17.516335   21496 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-845293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-845293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 16:49:17.516426   21496 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 16:49:17.519287   21496 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0919 16:49:17.519437   21496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:49:17.519470   21496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:49:17.532929   21496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33373
	I0919 16:49:17.533380   21496 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:49:17.533945   21496 main.go:141] libmachine: Using API Version  1
	I0919 16:49:17.533964   21496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:49:17.534306   21496 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:49:17.534461   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetMachineName
	I0919 16:49:17.534598   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:49:17.534719   21496 start.go:159] libmachine.API.Create for "ingress-addon-legacy-845293" (driver="kvm2")
	I0919 16:49:17.534746   21496 client.go:168] LocalClient.Create starting
	I0919 16:49:17.534784   21496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem
	I0919 16:49:17.534822   21496 main.go:141] libmachine: Decoding PEM data...
	I0919 16:49:17.534839   21496 main.go:141] libmachine: Parsing certificate...
	I0919 16:49:17.534889   21496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem
	I0919 16:49:17.534908   21496 main.go:141] libmachine: Decoding PEM data...
	I0919 16:49:17.534920   21496 main.go:141] libmachine: Parsing certificate...
	I0919 16:49:17.534934   21496 main.go:141] libmachine: Running pre-create checks...
	I0919 16:49:17.534944   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .PreCreateCheck
	I0919 16:49:17.535223   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetConfigRaw
	I0919 16:49:17.535569   21496 main.go:141] libmachine: Creating machine...
	I0919 16:49:17.535582   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .Create
	I0919 16:49:17.535718   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Creating KVM machine...
	I0919 16:49:17.536924   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found existing default KVM network
	I0919 16:49:17.537582   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:17.537458   21565 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0919 16:49:17.542523   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | trying to create private KVM network mk-ingress-addon-legacy-845293 192.168.39.0/24...
	I0919 16:49:17.607160   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Setting up store path in /home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293 ...
	I0919 16:49:17.607197   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Building disk image from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 16:49:17.607211   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | private KVM network mk-ingress-addon-legacy-845293 192.168.39.0/24 created
	I0919 16:49:17.607234   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:17.607108   21565 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:49:17.607254   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Downloading /home/jenkins/minikube-integration/17240-6042/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 16:49:17.808088   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:17.807954   21565 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa...
	I0919 16:49:18.008599   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:18.008476   21565 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/ingress-addon-legacy-845293.rawdisk...
	I0919 16:49:18.008708   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Writing magic tar header
	I0919 16:49:18.008735   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293 (perms=drwx------)
	I0919 16:49:18.008746   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Writing SSH key tar header
	I0919 16:49:18.008769   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:18.008585   21565 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293 ...
	I0919 16:49:18.008790   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293
	I0919 16:49:18.008811   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines (perms=drwxr-xr-x)
	I0919 16:49:18.008824   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines
	I0919 16:49:18.008842   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube (perms=drwxr-xr-x)
	I0919 16:49:18.008861   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042 (perms=drwxrwxr-x)
	I0919 16:49:18.008871   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 16:49:18.008880   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 16:49:18.008890   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Creating domain...
	I0919 16:49:18.008906   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:49:18.008926   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042
	I0919 16:49:18.008940   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 16:49:18.008951   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Checking permissions on dir: /home/jenkins
	I0919 16:49:18.008962   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Checking permissions on dir: /home
	I0919 16:49:18.008974   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Skipping /home - not owner
	I0919 16:49:18.009776   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) define libvirt domain using xml: 
	I0919 16:49:18.009801   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) <domain type='kvm'>
	I0919 16:49:18.009815   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   <name>ingress-addon-legacy-845293</name>
	I0919 16:49:18.009824   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   <memory unit='MiB'>4096</memory>
	I0919 16:49:18.009832   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   <vcpu>2</vcpu>
	I0919 16:49:18.009841   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   <features>
	I0919 16:49:18.009853   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <acpi/>
	I0919 16:49:18.009863   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <apic/>
	I0919 16:49:18.009876   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <pae/>
	I0919 16:49:18.009889   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     
	I0919 16:49:18.009919   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   </features>
	I0919 16:49:18.009949   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   <cpu mode='host-passthrough'>
	I0919 16:49:18.009961   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   
	I0919 16:49:18.009973   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   </cpu>
	I0919 16:49:18.009985   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   <os>
	I0919 16:49:18.009999   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <type>hvm</type>
	I0919 16:49:18.010044   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <boot dev='cdrom'/>
	I0919 16:49:18.010068   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <boot dev='hd'/>
	I0919 16:49:18.010077   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <bootmenu enable='no'/>
	I0919 16:49:18.010087   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   </os>
	I0919 16:49:18.010094   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   <devices>
	I0919 16:49:18.010101   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <disk type='file' device='cdrom'>
	I0919 16:49:18.010115   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/boot2docker.iso'/>
	I0919 16:49:18.010135   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <target dev='hdc' bus='scsi'/>
	I0919 16:49:18.010142   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <readonly/>
	I0919 16:49:18.010150   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     </disk>
	I0919 16:49:18.010156   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <disk type='file' device='disk'>
	I0919 16:49:18.010164   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 16:49:18.010175   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/ingress-addon-legacy-845293.rawdisk'/>
	I0919 16:49:18.010187   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <target dev='hda' bus='virtio'/>
	I0919 16:49:18.010194   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     </disk>
	I0919 16:49:18.010203   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <interface type='network'>
	I0919 16:49:18.010210   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <source network='mk-ingress-addon-legacy-845293'/>
	I0919 16:49:18.010216   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <model type='virtio'/>
	I0919 16:49:18.010223   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     </interface>
	I0919 16:49:18.010233   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <interface type='network'>
	I0919 16:49:18.010242   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <source network='default'/>
	I0919 16:49:18.010247   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <model type='virtio'/>
	I0919 16:49:18.010256   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     </interface>
	I0919 16:49:18.010262   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <serial type='pty'>
	I0919 16:49:18.010271   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <target port='0'/>
	I0919 16:49:18.010277   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     </serial>
	I0919 16:49:18.010285   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <console type='pty'>
	I0919 16:49:18.010294   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <target type='serial' port='0'/>
	I0919 16:49:18.010301   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     </console>
	I0919 16:49:18.010312   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     <rng model='virtio'>
	I0919 16:49:18.010331   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)       <backend model='random'>/dev/random</backend>
	I0919 16:49:18.010349   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     </rng>
	I0919 16:49:18.010359   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     
	I0919 16:49:18.010369   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)     
	I0919 16:49:18.010383   21496 main.go:141] libmachine: (ingress-addon-legacy-845293)   </devices>
	I0919 16:49:18.010396   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) </domain>
	I0919 16:49:18.010411   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) 
	I0919 16:49:18.014562   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:03:8a:0a in network default
	I0919 16:49:18.015140   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Ensuring networks are active...
	I0919 16:49:18.015171   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:18.015749   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Ensuring network default is active
	I0919 16:49:18.016122   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Ensuring network mk-ingress-addon-legacy-845293 is active
	I0919 16:49:18.016618   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Getting domain xml...
	I0919 16:49:18.017254   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Creating domain...
	I0919 16:49:19.213479   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Waiting to get IP...
	I0919 16:49:19.214323   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:19.214715   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:19.214744   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:19.214697   21565 retry.go:31] will retry after 243.034847ms: waiting for machine to come up
	I0919 16:49:19.459201   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:19.459691   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:19.459722   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:19.459633   21565 retry.go:31] will retry after 272.149696ms: waiting for machine to come up
	I0919 16:49:19.732994   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:19.733414   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:19.733437   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:19.733373   21565 retry.go:31] will retry after 466.467891ms: waiting for machine to come up
	I0919 16:49:20.200900   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:20.201346   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:20.201374   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:20.201310   21565 retry.go:31] will retry after 490.769672ms: waiting for machine to come up
	I0919 16:49:20.693950   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:20.694339   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:20.694371   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:20.694285   21565 retry.go:31] will retry after 560.965122ms: waiting for machine to come up
	I0919 16:49:21.257090   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:21.257574   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:21.257638   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:21.257570   21565 retry.go:31] will retry after 732.998279ms: waiting for machine to come up
	I0919 16:49:21.992549   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:21.992904   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:21.992946   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:21.992866   21565 retry.go:31] will retry after 788.253721ms: waiting for machine to come up
	I0919 16:49:22.783139   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:22.783552   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:22.783581   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:22.783500   21565 retry.go:31] will retry after 1.182883185s: waiting for machine to come up
	I0919 16:49:23.967736   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:23.968127   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:23.968157   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:23.968076   21565 retry.go:31] will retry after 1.27670604s: waiting for machine to come up
	I0919 16:49:25.246379   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:25.246696   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:25.246725   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:25.246642   21565 retry.go:31] will retry after 1.97262712s: waiting for machine to come up
	I0919 16:49:27.221682   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:27.222138   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:27.222169   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:27.222085   21565 retry.go:31] will retry after 2.333731325s: waiting for machine to come up
	I0919 16:49:29.557373   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:29.557782   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:29.557816   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:29.557726   21565 retry.go:31] will retry after 2.400032237s: waiting for machine to come up
	I0919 16:49:31.958972   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:31.959287   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:31.959312   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:31.959256   21565 retry.go:31] will retry after 4.536733183s: waiting for machine to come up
	I0919 16:49:36.500252   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:36.500602   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find current IP address of domain ingress-addon-legacy-845293 in network mk-ingress-addon-legacy-845293
	I0919 16:49:36.500645   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | I0919 16:49:36.500560   21565 retry.go:31] will retry after 3.889438701s: waiting for machine to come up
	I0919 16:49:40.393585   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.393991   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has current primary IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.394009   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Found IP for machine: 192.168.39.244
	I0919 16:49:40.394029   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Reserving static IP address...
	I0919 16:49:40.394326   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-845293", mac: "52:54:00:f3:73:d3", ip: "192.168.39.244"} in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.462926   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Getting to WaitForSSH function...
	I0919 16:49:40.462962   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Reserved static IP address: 192.168.39.244
	I0919 16:49:40.462976   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Waiting for SSH to be available...
	I0919 16:49:40.465197   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.465510   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:40.465562   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.465634   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Using SSH client type: external
	I0919 16:49:40.465665   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa (-rw-------)
	I0919 16:49:40.465706   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 16:49:40.465718   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | About to run SSH command:
	I0919 16:49:40.465726   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | exit 0
	I0919 16:49:40.552164   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | SSH cmd err, output: <nil>: 
	I0919 16:49:40.552455   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) KVM machine creation complete!
	I0919 16:49:40.552764   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetConfigRaw
	I0919 16:49:40.553268   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:49:40.553439   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:49:40.553588   21496 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 16:49:40.553612   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetState
	I0919 16:49:40.554726   21496 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 16:49:40.554742   21496 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 16:49:40.554751   21496 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 16:49:40.554761   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:40.556981   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.557404   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:40.557438   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.557582   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:40.557757   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:40.557906   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:40.558030   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:40.558209   21496 main.go:141] libmachine: Using SSH client type: native
	I0919 16:49:40.558672   21496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0919 16:49:40.558689   21496 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 16:49:40.667693   21496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:49:40.667716   21496 main.go:141] libmachine: Detecting the provisioner...
	I0919 16:49:40.667727   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:40.670719   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.671033   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:40.671071   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.671209   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:40.671387   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:40.671527   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:40.671676   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:40.671802   21496 main.go:141] libmachine: Using SSH client type: native
	I0919 16:49:40.672189   21496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0919 16:49:40.672203   21496 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 16:49:40.781181   21496 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0919 16:49:40.781264   21496 main.go:141] libmachine: found compatible host: buildroot
	I0919 16:49:40.781284   21496 main.go:141] libmachine: Provisioning with buildroot...
	I0919 16:49:40.781297   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetMachineName
	I0919 16:49:40.781529   21496 buildroot.go:166] provisioning hostname "ingress-addon-legacy-845293"
	I0919 16:49:40.781553   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetMachineName
	I0919 16:49:40.781760   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:40.783870   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.784264   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:40.784301   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.784449   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:40.784633   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:40.784769   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:40.784878   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:40.785002   21496 main.go:141] libmachine: Using SSH client type: native
	I0919 16:49:40.785308   21496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0919 16:49:40.785321   21496 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-845293 && echo "ingress-addon-legacy-845293" | sudo tee /etc/hostname
	I0919 16:49:40.905570   21496 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-845293
	
	I0919 16:49:40.905616   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:40.908450   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.908830   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:40.908858   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:40.909115   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:40.909337   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:40.909502   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:40.909626   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:40.909747   21496 main.go:141] libmachine: Using SSH client type: native
	I0919 16:49:40.910058   21496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0919 16:49:40.910093   21496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-845293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-845293/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-845293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 16:49:41.025245   21496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:49:41.025279   21496 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 16:49:41.025302   21496 buildroot.go:174] setting up certificates
	I0919 16:49:41.025319   21496 provision.go:83] configureAuth start
	I0919 16:49:41.025332   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetMachineName
	I0919 16:49:41.025654   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetIP
	I0919 16:49:41.028149   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.028454   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.028484   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.028642   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:41.030769   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.031089   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.031123   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.031234   21496 provision.go:138] copyHostCerts
	I0919 16:49:41.031262   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 16:49:41.031301   21496 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 16:49:41.031314   21496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 16:49:41.031398   21496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 16:49:41.031529   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 16:49:41.031556   21496 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 16:49:41.031563   21496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 16:49:41.031602   21496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 16:49:41.031664   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 16:49:41.031686   21496 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 16:49:41.031696   21496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 16:49:41.031730   21496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 16:49:41.031789   21496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-845293 san=[192.168.39.244 192.168.39.244 localhost 127.0.0.1 minikube ingress-addon-legacy-845293]
	I0919 16:49:41.199357   21496 provision.go:172] copyRemoteCerts
	I0919 16:49:41.199415   21496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 16:49:41.199437   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:41.202031   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.202440   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.202478   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.202640   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:41.202869   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:41.203019   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:41.203131   21496 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa Username:docker}
	I0919 16:49:41.289354   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 16:49:41.289431   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 16:49:41.312761   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 16:49:41.312831   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0919 16:49:41.335495   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 16:49:41.335554   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 16:49:41.357971   21496 provision.go:86] duration metric: configureAuth took 332.634839ms
	I0919 16:49:41.357998   21496 buildroot.go:189] setting minikube options for container-runtime
	I0919 16:49:41.358199   21496 config.go:182] Loaded profile config "ingress-addon-legacy-845293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0919 16:49:41.358283   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:41.360798   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.361116   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.361153   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.361340   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:41.361521   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:41.361686   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:41.361797   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:41.361941   21496 main.go:141] libmachine: Using SSH client type: native
	I0919 16:49:41.362261   21496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0919 16:49:41.362276   21496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 16:49:41.658491   21496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
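
The SSH command above writes a one-line drop-in telling CRI-O to treat the service CIDR (10.96.0.0/12) as an insecure registry, then restarts CRI-O. A hedged way to verify it by hand (assumed command, not run by the test):

    out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ssh "cat /etc/sysconfig/crio.minikube"
    # expected:
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
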
	I0919 16:49:41.658516   21496 main.go:141] libmachine: Checking connection to Docker...
	I0919 16:49:41.658525   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetURL
	I0919 16:49:41.659615   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Using libvirt version 6000000
	I0919 16:49:41.661690   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.661990   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.662030   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.662150   21496 main.go:141] libmachine: Docker is up and running!
	I0919 16:49:41.662160   21496 main.go:141] libmachine: Reticulating splines...
	I0919 16:49:41.662165   21496 client.go:171] LocalClient.Create took 24.127408061s
	I0919 16:49:41.662187   21496 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-845293" took 24.127466756s
	I0919 16:49:41.662200   21496 start.go:300] post-start starting for "ingress-addon-legacy-845293" (driver="kvm2")
	I0919 16:49:41.662215   21496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 16:49:41.662245   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:49:41.662466   21496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 16:49:41.662487   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:41.664310   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.664689   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.664726   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.664813   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:41.664984   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:41.665244   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:41.665383   21496 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa Username:docker}
	I0919 16:49:41.750710   21496 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 16:49:41.755069   21496 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 16:49:41.755093   21496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 16:49:41.755165   21496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 16:49:41.755285   21496 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 16:49:41.755302   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /etc/ssl/certs/132392.pem
	I0919 16:49:41.755408   21496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 16:49:41.764509   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 16:49:41.786549   21496 start.go:303] post-start completed in 124.332064ms
	I0919 16:49:41.786595   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetConfigRaw
	I0919 16:49:41.787215   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetIP
	I0919 16:49:41.789565   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.789891   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.789914   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.790172   21496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/config.json ...
	I0919 16:49:41.790382   21496 start.go:128] duration metric: createHost completed in 24.273943311s
	I0919 16:49:41.790411   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:41.792598   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.792899   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.792927   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.793084   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:41.793250   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:41.793394   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:41.793540   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:41.793698   21496 main.go:141] libmachine: Using SSH client type: native
	I0919 16:49:41.794059   21496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0919 16:49:41.794079   21496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 16:49:41.901076   21496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142181.882319647
	
	I0919 16:49:41.901095   21496 fix.go:206] guest clock: 1695142181.882319647
	I0919 16:49:41.901108   21496 fix.go:219] Guest: 2023-09-19 16:49:41.882319647 +0000 UTC Remote: 2023-09-19 16:49:41.790396706 +0000 UTC m=+44.859778141 (delta=91.922941ms)
	I0919 16:49:41.901151   21496 fix.go:190] guest clock delta is within tolerance: 91.922941ms
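
The guest-clock check compares `date +%s.%N` run inside the VM against the host's wall clock; here the 91.9 ms delta is within tolerance, so no time correction is applied. A minimal sketch of the same comparison (illustrative only, not minikube's actual code):

    guest=$(out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ssh "date +%s.%N")
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "clock delta: %.3fs\n", h - g }'
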
	I0919 16:49:41.901157   21496 start.go:83] releasing machines lock for "ingress-addon-legacy-845293", held for 24.384829651s
	I0919 16:49:41.901181   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:49:41.901448   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetIP
	I0919 16:49:41.903761   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.904125   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.904158   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.904223   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:49:41.904713   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:49:41.904910   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:49:41.905000   21496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 16:49:41.905034   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:41.905147   21496 ssh_runner.go:195] Run: cat /version.json
	I0919 16:49:41.905168   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:49:41.907334   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.907651   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.907704   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.907738   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.907765   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:41.907926   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:41.908082   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:41.908122   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:41.908154   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:41.908220   21496 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa Username:docker}
	I0919 16:49:41.908282   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:49:41.908423   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:49:41.908596   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:49:41.908726   21496 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa Username:docker}
	I0919 16:49:41.985057   21496 ssh_runner.go:195] Run: systemctl --version
	I0919 16:49:42.009171   21496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 16:49:42.171142   21496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 16:49:42.176737   21496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 16:49:42.176792   21496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 16:49:42.191470   21496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
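
The find/mv above renames any pre-existing bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the bridge CNI that minikube sets up later; in this run 87-podman-bridge.conflist was the only match. To see the result on the guest (assumed command):

    out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ssh "ls /etc/cni/net.d/"
    # expect: 87-podman-bridge.conflist.mk_disabled
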
	I0919 16:49:42.191491   21496 start.go:469] detecting cgroup driver to use...
	I0919 16:49:42.191548   21496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 16:49:42.203880   21496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 16:49:42.215050   21496 docker.go:196] disabling cri-docker service (if available) ...
	I0919 16:49:42.215094   21496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 16:49:42.226439   21496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 16:49:42.238139   21496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 16:49:42.340602   21496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 16:49:42.457165   21496 docker.go:212] disabling docker service ...
	I0919 16:49:42.457232   21496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 16:49:42.471478   21496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 16:49:42.483374   21496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 16:49:42.581440   21496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 16:49:42.680815   21496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 16:49:42.693999   21496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 16:49:42.710895   21496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0919 16:49:42.710956   21496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:49:42.720833   21496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 16:49:42.720896   21496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:49:42.730698   21496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:49:42.740880   21496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:49:42.750970   21496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
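
The sed edits above point CRI-O at the registry.k8s.io/pause:3.2 pause image and switch it to the cgroupfs cgroup manager with conmon placed in the pod cgroup. A hedged way to confirm the drop-in ended up as intended (command is an assumption, not part of the log):

    out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ssh \
      "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
    # expected (roughly):
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
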
	I0919 16:49:42.761125   21496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 16:49:42.769907   21496 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 16:49:42.769959   21496 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 16:49:42.783758   21496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 16:49:42.792959   21496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:49:42.893503   21496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 16:49:43.052028   21496 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 16:49:43.052097   21496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 16:49:43.057034   21496 start.go:537] Will wait 60s for crictl version
	I0919 16:49:43.057079   21496 ssh_runner.go:195] Run: which crictl
	I0919 16:49:43.060938   21496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 16:49:43.098225   21496 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 16:49:43.098307   21496 ssh_runner.go:195] Run: crio --version
	I0919 16:49:43.143921   21496 ssh_runner.go:195] Run: crio --version
	I0919 16:49:43.198716   21496 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0919 16:49:43.200266   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetIP
	I0919 16:49:43.202766   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:43.203118   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:49:43.203152   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:49:43.203374   21496 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 16:49:43.207885   21496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
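
The bash one-liner above rewrites /etc/hosts in one pass: it drops any existing host.minikube.internal entry, appends the gateway address, and copies the temp file back into place with sudo. Verifying by hand would look like this (assumed command):

    out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ssh "grep host.minikube.internal /etc/hosts"
    # 192.168.39.1	host.minikube.internal
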
	I0919 16:49:43.220676   21496 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0919 16:49:43.220740   21496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 16:49:43.257083   21496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0919 16:49:43.257163   21496 ssh_runner.go:195] Run: which lz4
	I0919 16:49:43.261256   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0919 16:49:43.261372   21496 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 16:49:43.265612   21496 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 16:49:43.265638   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0919 16:49:45.206737   21496 crio.go:444] Took 1.945404 seconds to copy over tarball
	I0919 16:49:45.206790   21496 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 16:49:48.430550   21496 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.22373079s)
	I0919 16:49:48.430577   21496 crio.go:451] Took 3.223821 seconds to extract the tarball
	I0919 16:49:48.430587   21496 ssh_runner.go:146] rm: /preloaded.tar.lz4
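
Because the guest had no preloaded images, the ~495 MB preload tarball is copied over the SSH session and unpacked into /var, seeding CRI-O's image store for v1.18.20. A rough manual equivalent using the key and address from this run (the test streams the file over its own SSH session rather than calling scp; /tmp is an assumed staging path):

    KEY=/home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa
    TARBALL=/home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
    scp -i "$KEY" "$TARBALL" docker@192.168.39.244:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.39.244 "sudo tar -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4"
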
	I0919 16:49:48.474934   21496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 16:49:48.528624   21496 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0919 16:49:48.528647   21496 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 16:49:48.528699   21496 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0919 16:49:48.528734   21496 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0919 16:49:48.528757   21496 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0919 16:49:48.528763   21496 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0919 16:49:48.528800   21496 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 16:49:48.528844   21496 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0919 16:49:48.528691   21496 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 16:49:48.528930   21496 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0919 16:49:48.529901   21496 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0919 16:49:48.529901   21496 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 16:49:48.529910   21496 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0919 16:49:48.529912   21496 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 16:49:48.529924   21496 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0919 16:49:48.529913   21496 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0919 16:49:48.529968   21496 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0919 16:49:48.529971   21496 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0919 16:49:48.711996   21496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0919 16:49:48.726297   21496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0919 16:49:48.756866   21496 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0919 16:49:48.756907   21496 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0919 16:49:48.756951   21496 ssh_runner.go:195] Run: which crictl
	I0919 16:49:48.772326   21496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0919 16:49:48.772371   21496 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0919 16:49:48.772421   21496 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0919 16:49:48.772462   21496 ssh_runner.go:195] Run: which crictl
	I0919 16:49:48.812007   21496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0919 16:49:48.812247   21496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0919 16:49:48.812311   21496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0919 16:49:48.821224   21496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0919 16:49:48.824048   21496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 16:49:48.857857   21496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0919 16:49:48.866591   21496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0919 16:49:48.912931   21496 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0919 16:49:48.912980   21496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0919 16:49:48.912982   21496 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0919 16:49:48.913019   21496 ssh_runner.go:195] Run: which crictl
	I0919 16:49:48.932518   21496 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0919 16:49:48.932560   21496 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0919 16:49:48.932605   21496 ssh_runner.go:195] Run: which crictl
	I0919 16:49:48.938196   21496 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0919 16:49:48.938235   21496 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 16:49:48.938282   21496 ssh_runner.go:195] Run: which crictl
	I0919 16:49:48.950342   21496 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0919 16:49:48.950380   21496 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0919 16:49:48.950422   21496 ssh_runner.go:195] Run: which crictl
	I0919 16:49:48.969344   21496 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0919 16:49:48.969396   21496 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0919 16:49:48.969435   21496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0919 16:49:48.969462   21496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0919 16:49:48.969477   21496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0919 16:49:48.969437   21496 ssh_runner.go:195] Run: which crictl
	I0919 16:49:48.969515   21496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0919 16:49:49.062246   21496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0919 16:49:49.062291   21496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0919 16:49:49.062352   21496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0919 16:49:49.062420   21496 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0919 16:49:49.062468   21496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0919 16:49:49.100744   21496 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0919 16:49:49.526595   21496 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 16:49:49.667178   21496 cache_images.go:92] LoadImages completed in 1.138513809s
	W0919 16:49:49.667255   21496 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
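
LoadImages bails out because the per-image cache under .minikube/cache/images was apparently never populated on this host; that is only a warning, since kubeadm's preflight (further below) pulls whatever v1.18.20 images are still missing. A quick post-hoc check of what CRI-O ends up with (assumed command):

    out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ssh "sudo crictl images" | grep -E 'v1.18.20|pause|coredns|etcd'
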
	I0919 16:49:49.667329   21496 ssh_runner.go:195] Run: crio config
	I0919 16:49:49.730046   21496 cni.go:84] Creating CNI manager for ""
	I0919 16:49:49.730070   21496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 16:49:49.730090   21496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 16:49:49.730114   21496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-845293 NodeName:ingress-addon-legacy-845293 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0919 16:49:49.730252   21496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-845293"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 16:49:49.730325   21496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-845293 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-845293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
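
Everything above is generated in memory and then copied to the guest: the kubeadm InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration land in /var/tmp/minikube/kubeadm.yaml.new (copied to kubeadm.yaml just before init), and the kubelet drop-in wires the kubelet to CRI-O's socket. One hedged way to sanity-check the config before init is to ask kubeadm which images it implies (hypothetical command, not part of the test):

    out/minikube-linux-amd64 -p ingress-addon-legacy-845293 ssh \
      "sudo /var/lib/minikube/binaries/v1.18.20/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new"
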
	I0919 16:49:49.730370   21496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0919 16:49:49.739237   21496 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 16:49:49.739301   21496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 16:49:49.747798   21496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0919 16:49:49.763027   21496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0919 16:49:49.778601   21496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0919 16:49:49.793829   21496 ssh_runner.go:195] Run: grep 192.168.39.244	control-plane.minikube.internal$ /etc/hosts
	I0919 16:49:49.797604   21496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 16:49:49.809788   21496 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293 for IP: 192.168.39.244
	I0919 16:49:49.809829   21496 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:49:49.809986   21496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 16:49:49.810035   21496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 16:49:49.810087   21496 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.key
	I0919 16:49:49.810102   21496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt with IP's: []
	I0919 16:49:49.997258   21496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt ...
	I0919 16:49:49.997292   21496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: {Name:mkf6381acdd899e783d79e01d09ec473e6481280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:49:49.997459   21496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.key ...
	I0919 16:49:49.997471   21496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.key: {Name:mkc926e8c9961e943c8447e0760eb3a0909eaeea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:49:49.997545   21496 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.key.79850b64
	I0919 16:49:49.997562   21496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.crt.79850b64 with IP's: [192.168.39.244 10.96.0.1 127.0.0.1 10.0.0.1]
	I0919 16:49:50.165375   21496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.crt.79850b64 ...
	I0919 16:49:50.165403   21496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.crt.79850b64: {Name:mke5a624618fd5663ac33ab56c736d8c8aa3c84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:49:50.165548   21496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.key.79850b64 ...
	I0919 16:49:50.165559   21496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.key.79850b64: {Name:mka9ae738cb04dfbe85e4ce895c8dd37252de43e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:49:50.165618   21496 certs.go:337] copying /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.crt.79850b64 -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.crt
	I0919 16:49:50.165681   21496 certs.go:341] copying /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.key.79850b64 -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.key
	I0919 16:49:50.165730   21496 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.key
	I0919 16:49:50.165742   21496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.crt with IP's: []
	I0919 16:49:50.375463   21496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.crt ...
	I0919 16:49:50.375490   21496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.crt: {Name:mk78662dc1077aef9158e40acde1271946f082c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:49:50.375641   21496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.key ...
	I0919 16:49:50.375653   21496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.key: {Name:mkec40b071effae751f4e5c0b257f8ed3cafe484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:49:50.375720   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 16:49:50.375740   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 16:49:50.375752   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 16:49:50.375764   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 16:49:50.375774   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 16:49:50.375785   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 16:49:50.375795   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 16:49:50.375810   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 16:49:50.375858   21496 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 16:49:50.375897   21496 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 16:49:50.375907   21496 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 16:49:50.375932   21496 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 16:49:50.375953   21496 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 16:49:50.375976   21496 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 16:49:50.376015   21496 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 16:49:50.376041   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem -> /usr/share/ca-certificates/13239.pem
	I0919 16:49:50.376053   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /usr/share/ca-certificates/132392.pem
	I0919 16:49:50.376065   21496 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:49:50.376654   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 16:49:50.402031   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 16:49:50.423654   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 16:49:50.445886   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 16:49:50.467049   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 16:49:50.488798   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 16:49:50.510440   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 16:49:50.529957   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 16:49:50.551600   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 16:49:50.573025   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 16:49:50.594720   21496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 16:49:50.616155   21496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 16:49:50.631182   21496 ssh_runner.go:195] Run: openssl version
	I0919 16:49:50.636522   21496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 16:49:50.645850   21496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 16:49:50.650137   21496 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 16:49:50.650191   21496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 16:49:50.655549   21496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 16:49:50.664645   21496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 16:49:50.673715   21496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:49:50.678035   21496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:49:50.678074   21496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:49:50.682988   21496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 16:49:50.692099   21496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 16:49:50.701267   21496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 16:49:50.705695   21496 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 16:49:50.705738   21496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 16:49:50.710963   21496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
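
The openssl/ln pairs above register each CA certificate with the guest's trust store: the symlink name is the certificate's OpenSSL subject hash plus a ".0" suffix, which is what CApath-style lookups expect. Reproducing the hash seen in the log for the minikube CA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941  -> hence the /etc/ssl/certs/b5213941.0 symlink created above
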
	I0919 16:49:50.719947   21496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 16:49:50.723685   21496 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 16:49:50.723760   21496 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-845293 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-845293 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:49:50.723836   21496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 16:49:50.723876   21496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 16:49:50.762248   21496 cri.go:89] found id: ""
	I0919 16:49:50.762319   21496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 16:49:50.770906   21496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 16:49:50.779202   21496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 16:49:50.787182   21496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 16:49:50.787225   21496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0919 16:49:50.853456   21496 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0919 16:49:50.853550   21496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 16:49:50.988597   21496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 16:49:50.988751   21496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 16:49:50.988854   21496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 16:49:51.207651   21496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 16:49:51.207802   21496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 16:49:51.207866   21496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 16:49:51.334135   21496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 16:49:51.336902   21496 out.go:204]   - Generating certificates and keys ...
	I0919 16:49:51.337002   21496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 16:49:51.337095   21496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 16:49:51.590748   21496 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 16:49:51.786726   21496 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 16:49:51.938704   21496 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 16:49:52.172649   21496 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 16:49:52.307126   21496 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 16:49:52.307369   21496 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-845293 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0919 16:49:52.443754   21496 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 16:49:52.443905   21496 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-845293 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0919 16:49:52.555535   21496 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 16:49:52.630914   21496 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 16:49:52.894115   21496 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 16:49:52.894442   21496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 16:49:53.036985   21496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 16:49:53.283684   21496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 16:49:53.631375   21496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 16:49:53.992241   21496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 16:49:53.993151   21496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 16:49:53.995108   21496 out.go:204]   - Booting up control plane ...
	I0919 16:49:53.995212   21496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 16:49:54.012621   21496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 16:49:54.014049   21496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 16:49:54.015474   21496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 16:49:54.018867   21496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 16:50:02.521415   21496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503160 seconds
	I0919 16:50:02.521553   21496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 16:50:02.539393   21496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 16:50:03.068901   21496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 16:50:03.069039   21496 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-845293 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0919 16:50:03.577336   21496 kubeadm.go:322] [bootstrap-token] Using token: aeeb4y.fv00am534tihqzp5
	I0919 16:50:03.579546   21496 out.go:204]   - Configuring RBAC rules ...
	I0919 16:50:03.579687   21496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 16:50:03.586055   21496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 16:50:03.595691   21496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 16:50:03.599148   21496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 16:50:03.602845   21496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 16:50:03.608380   21496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 16:50:03.620541   21496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 16:50:03.964182   21496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 16:50:04.068881   21496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 16:50:04.068898   21496 kubeadm.go:322] 
	I0919 16:50:04.068950   21496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 16:50:04.068956   21496 kubeadm.go:322] 
	I0919 16:50:04.069032   21496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 16:50:04.069044   21496 kubeadm.go:322] 
	I0919 16:50:04.069072   21496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 16:50:04.069141   21496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 16:50:04.069213   21496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 16:50:04.069224   21496 kubeadm.go:322] 
	I0919 16:50:04.069297   21496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 16:50:04.069421   21496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 16:50:04.069525   21496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 16:50:04.069535   21496 kubeadm.go:322] 
	I0919 16:50:04.069648   21496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 16:50:04.069762   21496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 16:50:04.069774   21496 kubeadm.go:322] 
	I0919 16:50:04.069870   21496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token aeeb4y.fv00am534tihqzp5 \
	I0919 16:50:04.069990   21496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 16:50:04.070014   21496 kubeadm.go:322]     --control-plane 
	I0919 16:50:04.070018   21496 kubeadm.go:322] 
	I0919 16:50:04.070127   21496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 16:50:04.070138   21496 kubeadm.go:322] 
	I0919 16:50:04.070252   21496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token aeeb4y.fv00am534tihqzp5 \
	I0919 16:50:04.070342   21496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 16:50:04.070789   21496 kubeadm.go:322] W0919 16:49:50.846257     957 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0919 16:50:04.070962   21496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 16:50:04.071106   21496 kubeadm.go:322] W0919 16:49:54.007293     957 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0919 16:50:04.071279   21496 kubeadm.go:322] W0919 16:49:54.008934     957 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0919 16:50:04.071381   21496 cni.go:84] Creating CNI manager for ""
	I0919 16:50:04.071398   21496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 16:50:04.073996   21496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 16:50:04.075285   21496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 16:50:04.084229   21496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
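The line above shows minikube copying a 457-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist for the "Configuring bridge CNI" step. As a rough illustration of what a bridge-plus-portmap conflist of that kind looks like (field values below are illustrative assumptions, not the exact file minikube writes), it could be written by hand like this:

    # Illustrative only: a minimal bridge + portmap conflist in CNI 0.3.x format.
    # The subnet and names are assumed values, not taken from this test run.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF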
	I0919 16:50:04.104996   21496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 16:50:04.105069   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:04.105077   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=ingress-addon-legacy-845293 minikube.k8s.io/updated_at=2023_09_19T16_50_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:04.136692   21496 ops.go:34] apiserver oom_adj: -16
	I0919 16:50:04.437722   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:04.578229   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:05.217668   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:05.717495   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:06.218208   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:06.717491   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:07.217931   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:07.717707   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:08.218395   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:08.718159   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:09.217374   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:09.717397   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:10.218125   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:10.718067   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:11.217861   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:11.717532   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:12.217809   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:12.717717   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:13.217985   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:13.718040   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:14.217369   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:14.717384   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:15.217579   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:15.717630   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:16.218158   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:16.718248   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:17.217880   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:17.718219   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:18.217731   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:18.717703   21496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:50:19.030388   21496 kubeadm.go:1081] duration metric: took 14.925376532s to wait for elevateKubeSystemPrivileges.
	I0919 16:50:19.030425   21496 kubeadm.go:406] StartCluster complete in 28.306690698s
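The block of repeated "kubectl get sa default" calls above is minikube polling until the default service account exists before it finishes elevating kube-system privileges. A hand-rolled equivalent of that wait loop (the timeout and 500ms interval are assumptions mirroring the log cadence, not minikube's internals):

    # Poll until the "default" ServiceAccount appears, as the log above does.
    KUBECTL=/var/lib/minikube/binaries/v1.18.20/kubectl
    for i in $(seq 1 60); do
      if sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
        echo "default service account is ready"
        break
      fi
      sleep 0.5
    done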
	I0919 16:50:19.030446   21496 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:50:19.030536   21496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:50:19.031513   21496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:50:19.031744   21496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 16:50:19.031780   21496 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 16:50:19.031865   21496 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-845293"
	I0919 16:50:19.031874   21496 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-845293"
	I0919 16:50:19.031887   21496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-845293"
	I0919 16:50:19.031887   21496 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-845293"
	I0919 16:50:19.031929   21496 host.go:66] Checking if "ingress-addon-legacy-845293" exists ...
	I0919 16:50:19.031966   21496 config.go:182] Loaded profile config "ingress-addon-legacy-845293": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0919 16:50:19.032339   21496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:50:19.032375   21496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:50:19.032444   21496 kapi.go:59] client config for ingress-addon-legacy-845293: &rest.Config{Host:"https://192.168.39.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:50:19.032733   21496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:50:19.032764   21496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:50:19.033253   21496 cert_rotation.go:137] Starting client certificate rotation controller
	I0919 16:50:19.047216   21496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I0919 16:50:19.047459   21496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39169
	I0919 16:50:19.047688   21496 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:50:19.047809   21496 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:50:19.048212   21496 main.go:141] libmachine: Using API Version  1
	I0919 16:50:19.048239   21496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:50:19.048310   21496 main.go:141] libmachine: Using API Version  1
	I0919 16:50:19.048330   21496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:50:19.048602   21496 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:50:19.048648   21496 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:50:19.048792   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetState
	I0919 16:50:19.049212   21496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:50:19.049246   21496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:50:19.050901   21496 kapi.go:59] client config for ingress-addon-legacy-845293: &rest.Config{Host:"https://192.168.39.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:50:19.055589   21496 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-845293"
	I0919 16:50:19.055631   21496 host.go:66] Checking if "ingress-addon-legacy-845293" exists ...
	I0919 16:50:19.055888   21496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:50:19.055917   21496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:50:19.064366   21496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42357
	I0919 16:50:19.064798   21496 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:50:19.065421   21496 main.go:141] libmachine: Using API Version  1
	I0919 16:50:19.065446   21496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:50:19.065810   21496 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:50:19.065999   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetState
	I0919 16:50:19.067529   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:50:19.069623   21496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 16:50:19.071029   21496 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 16:50:19.071043   21496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 16:50:19.071057   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:50:19.070265   21496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42627
	I0919 16:50:19.071440   21496 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:50:19.071870   21496 main.go:141] libmachine: Using API Version  1
	I0919 16:50:19.071892   21496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:50:19.072239   21496 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:50:19.072887   21496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:50:19.072926   21496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:50:19.074474   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:50:19.074952   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:50:19.074991   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:50:19.075168   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:50:19.075342   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:50:19.075519   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:50:19.075695   21496 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa Username:docker}
	I0919 16:50:19.087118   21496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0919 16:50:19.087495   21496 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:50:19.087979   21496 main.go:141] libmachine: Using API Version  1
	I0919 16:50:19.088006   21496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:50:19.088308   21496 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:50:19.088484   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetState
	I0919 16:50:19.089955   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .DriverName
	I0919 16:50:19.090182   21496 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 16:50:19.090200   21496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 16:50:19.090216   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHHostname
	I0919 16:50:19.092914   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:50:19.093324   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:73:d3", ip: ""} in network mk-ingress-addon-legacy-845293: {Iface:virbr1 ExpiryTime:2023-09-19 17:49:33 +0000 UTC Type:0 Mac:52:54:00:f3:73:d3 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-845293 Clientid:01:52:54:00:f3:73:d3}
	I0919 16:50:19.093340   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | domain ingress-addon-legacy-845293 has defined IP address 192.168.39.244 and MAC address 52:54:00:f3:73:d3 in network mk-ingress-addon-legacy-845293
	I0919 16:50:19.093545   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHPort
	I0919 16:50:19.093681   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHKeyPath
	I0919 16:50:19.093814   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .GetSSHUsername
	I0919 16:50:19.093957   21496 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/ingress-addon-legacy-845293/id_rsa Username:docker}
	I0919 16:50:19.107399   21496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-845293" context rescaled to 1 replicas
	I0919 16:50:19.107431   21496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 16:50:19.109244   21496 out.go:177] * Verifying Kubernetes components...
	I0919 16:50:19.110762   21496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:50:19.294657   21496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 16:50:19.318571   21496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 16:50:19.508250   21496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
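The sed pipeline above injects a "hosts" block resolving host.minikube.internal to 192.168.39.1 into the CoreDNS Corefile. A quick way to confirm the record landed, using standard kubectl rather than the test harness (sketch):

    # Show the injected hosts block in the live CoreDNS ConfigMap.
    kubectl --context ingress-addon-legacy-845293 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts'
    # Expected, per the replace command above:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }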
	I0919 16:50:19.508869   21496 kapi.go:59] client config for ingress-addon-legacy-845293: &rest.Config{Host:"https://192.168.39.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:50:19.509218   21496 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-845293" to be "Ready" ...
	I0919 16:50:19.530414   21496 node_ready.go:49] node "ingress-addon-legacy-845293" has status "Ready":"True"
	I0919 16:50:19.530450   21496 node_ready.go:38] duration metric: took 21.209279ms waiting for node "ingress-addon-legacy-845293" to be "Ready" ...
	I0919 16:50:19.530465   21496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 16:50:19.944403   21496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-bsghp" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:20.111116   21496 main.go:141] libmachine: Making call to close driver server
	I0919 16:50:20.111141   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .Close
	I0919 16:50:20.111427   21496 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:50:20.111457   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Closing plugin on server side
	I0919 16:50:20.111475   21496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:50:20.111516   21496 main.go:141] libmachine: Making call to close driver server
	I0919 16:50:20.111530   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .Close
	I0919 16:50:20.111755   21496 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:50:20.111772   21496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:50:20.111785   21496 main.go:141] libmachine: Making call to close driver server
	I0919 16:50:20.111795   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .Close
	I0919 16:50:20.111993   21496 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:50:20.113364   21496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:50:20.112006   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Closing plugin on server side
	I0919 16:50:20.209899   21496 main.go:141] libmachine: Making call to close driver server
	I0919 16:50:20.209937   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .Close
	I0919 16:50:20.209939   21496 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 16:50:20.210201   21496 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:50:20.210219   21496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:50:20.210224   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Closing plugin on server side
	I0919 16:50:20.210229   21496 main.go:141] libmachine: Making call to close driver server
	I0919 16:50:20.210294   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) Calling .Close
	I0919 16:50:20.210514   21496 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:50:20.210530   21496 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:50:20.210536   21496 main.go:141] libmachine: (ingress-addon-legacy-845293) DBG | Closing plugin on server side
	I0919 16:50:20.212675   21496 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0919 16:50:20.214015   21496 addons.go:502] enable addons completed in 1.182232114s: enabled=[default-storageclass storage-provisioner]
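At this point the log reports the default-storageclass and storage-provisioner addons enabled. A quick external check of both (a sketch; the pod name matches the storage-provisioner pod listed later in this log, and storageclass.yaml creates the cluster's default StorageClass):

    # The storage-provisioner pod runs in kube-system; the addon also installs a default StorageClass.
    kubectl --context ingress-addon-legacy-845293 -n kube-system get pod storage-provisioner
    kubectl --context ingress-addon-legacy-845293 get storageclass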
	I0919 16:50:22.123144   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:24.621516   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:27.121719   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:29.622492   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:32.124996   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:34.622332   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:37.121061   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:39.122966   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:41.620558   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:43.621917   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:45.626185   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:48.122574   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:50.122783   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:52.623130   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:55.122463   21496 pod_ready.go:102] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"False"
	I0919 16:50:55.624226   21496 pod_ready.go:92] pod "coredns-66bff467f8-bsghp" in "kube-system" namespace has status "Ready":"True"
	I0919 16:50:55.624256   21496 pod_ready.go:81] duration metric: took 35.679803302s waiting for pod "coredns-66bff467f8-bsghp" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.624269   21496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-tfkz9" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.625842   21496 pod_ready.go:97] error getting pod "coredns-66bff467f8-tfkz9" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-tfkz9" not found
	I0919 16:50:55.625861   21496 pod_ready.go:81] duration metric: took 1.585767ms waiting for pod "coredns-66bff467f8-tfkz9" in "kube-system" namespace to be "Ready" ...
	E0919 16:50:55.625870   21496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-tfkz9" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-tfkz9" not found
	I0919 16:50:55.625876   21496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-845293" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.632251   21496 pod_ready.go:92] pod "etcd-ingress-addon-legacy-845293" in "kube-system" namespace has status "Ready":"True"
	I0919 16:50:55.632269   21496 pod_ready.go:81] duration metric: took 6.385917ms waiting for pod "etcd-ingress-addon-legacy-845293" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.632277   21496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-845293" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.638075   21496 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-845293" in "kube-system" namespace has status "Ready":"True"
	I0919 16:50:55.638089   21496 pod_ready.go:81] duration metric: took 5.805879ms waiting for pod "kube-apiserver-ingress-addon-legacy-845293" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.638097   21496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-845293" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.647298   21496 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-845293" in "kube-system" namespace has status "Ready":"True"
	I0919 16:50:55.647317   21496 pod_ready.go:81] duration metric: took 9.213243ms waiting for pod "kube-controller-manager-ingress-addon-legacy-845293" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.647328   21496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nbv66" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.816549   21496 request.go:629] Waited for 166.326236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ingress-addon-legacy-845293
	I0919 16:50:55.821074   21496 pod_ready.go:92] pod "kube-proxy-nbv66" in "kube-system" namespace has status "Ready":"True"
	I0919 16:50:55.821093   21496 pod_ready.go:81] duration metric: took 173.757781ms waiting for pod "kube-proxy-nbv66" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:55.821102   21496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-845293" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:56.016487   21496 request.go:629] Waited for 195.327764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-845293
	I0919 16:50:56.216268   21496 request.go:629] Waited for 196.146755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ingress-addon-legacy-845293
	I0919 16:50:56.221066   21496 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-845293" in "kube-system" namespace has status "Ready":"True"
	I0919 16:50:56.221089   21496 pod_ready.go:81] duration metric: took 399.978622ms waiting for pod "kube-scheduler-ingress-addon-legacy-845293" in "kube-system" namespace to be "Ready" ...
	I0919 16:50:56.221108   21496 pod_ready.go:38] duration metric: took 36.690613733s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 16:50:56.221131   21496 api_server.go:52] waiting for apiserver process to appear ...
	I0919 16:50:56.221186   21496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 16:50:56.233959   21496 api_server.go:72] duration metric: took 37.126503129s to wait for apiserver process to appear ...
	I0919 16:50:56.233975   21496 api_server.go:88] waiting for apiserver healthz status ...
	I0919 16:50:56.233988   21496 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I0919 16:50:56.239868   21496 api_server.go:279] https://192.168.39.244:8443/healthz returned 200:
	ok
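The healthz probe above hits the API server directly over HTTPS and gets a 200 with body "ok". An equivalent manual check from the host (assuming the default RBAC that permits unauthenticated access to /healthz; otherwise pass the client cert and key from the profile directory):

    # Returns the literal string "ok" on a healthy API server, matching the 200 logged above.
    curl -sk https://192.168.39.244:8443/healthz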
	I0919 16:50:56.240955   21496 api_server.go:141] control plane version: v1.18.20
	I0919 16:50:56.240975   21496 api_server.go:131] duration metric: took 6.994271ms to wait for apiserver health ...
	I0919 16:50:56.240982   21496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 16:50:56.416439   21496 request.go:629] Waited for 175.368918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0919 16:50:56.422389   21496 system_pods.go:59] 7 kube-system pods found
	I0919 16:50:56.422414   21496 system_pods.go:61] "coredns-66bff467f8-bsghp" [2672543d-94d8-4929-a675-5bef7e7a88cc] Running
	I0919 16:50:56.422419   21496 system_pods.go:61] "etcd-ingress-addon-legacy-845293" [ae6901a7-4597-4aaa-83ef-52e45ef5ff05] Running
	I0919 16:50:56.422423   21496 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-845293" [77e5c8a2-90e8-4825-8750-29b017e251ed] Running
	I0919 16:50:56.422427   21496 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-845293" [fee573e5-7082-4729-9cea-df83ac21ec15] Running
	I0919 16:50:56.422431   21496 system_pods.go:61] "kube-proxy-nbv66" [3e0ef9af-57fc-4e32-a5ba-bfa875a85f4c] Running
	I0919 16:50:56.422437   21496 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-845293" [b9cda9e4-6a75-4b7b-aefb-a1e03c29bc0c] Running
	I0919 16:50:56.422441   21496 system_pods.go:61] "storage-provisioner" [98ce6002-3669-44ba-815b-882fe4a8fb80] Running
	I0919 16:50:56.422449   21496 system_pods.go:74] duration metric: took 181.460051ms to wait for pod list to return data ...
	I0919 16:50:56.422459   21496 default_sa.go:34] waiting for default service account to be created ...
	I0919 16:50:56.615833   21496 request.go:629] Waited for 193.297874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I0919 16:50:56.618914   21496 default_sa.go:45] found service account: "default"
	I0919 16:50:56.618932   21496 default_sa.go:55] duration metric: took 196.467884ms for default service account to be created ...
	I0919 16:50:56.618940   21496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 16:50:56.816338   21496 request.go:629] Waited for 197.340692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0919 16:50:56.823888   21496 system_pods.go:86] 7 kube-system pods found
	I0919 16:50:56.823911   21496 system_pods.go:89] "coredns-66bff467f8-bsghp" [2672543d-94d8-4929-a675-5bef7e7a88cc] Running
	I0919 16:50:56.823916   21496 system_pods.go:89] "etcd-ingress-addon-legacy-845293" [ae6901a7-4597-4aaa-83ef-52e45ef5ff05] Running
	I0919 16:50:56.823921   21496 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-845293" [77e5c8a2-90e8-4825-8750-29b017e251ed] Running
	I0919 16:50:56.823925   21496 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-845293" [fee573e5-7082-4729-9cea-df83ac21ec15] Running
	I0919 16:50:56.823929   21496 system_pods.go:89] "kube-proxy-nbv66" [3e0ef9af-57fc-4e32-a5ba-bfa875a85f4c] Running
	I0919 16:50:56.823932   21496 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-845293" [b9cda9e4-6a75-4b7b-aefb-a1e03c29bc0c] Running
	I0919 16:50:56.823936   21496 system_pods.go:89] "storage-provisioner" [98ce6002-3669-44ba-815b-882fe4a8fb80] Running
	I0919 16:50:56.823942   21496 system_pods.go:126] duration metric: took 204.996815ms to wait for k8s-apps to be running ...
	I0919 16:50:56.823952   21496 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 16:50:56.823996   21496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:50:56.836843   21496 system_svc.go:56] duration metric: took 12.885703ms WaitForService to wait for kubelet.
	I0919 16:50:56.836863   21496 kubeadm.go:581] duration metric: took 37.729411323s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 16:50:56.836878   21496 node_conditions.go:102] verifying NodePressure condition ...
	I0919 16:50:57.016465   21496 request.go:629] Waited for 179.518539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes
	I0919 16:50:57.020015   21496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 16:50:57.020043   21496 node_conditions.go:123] node cpu capacity is 2
	I0919 16:50:57.020055   21496 node_conditions.go:105] duration metric: took 183.172066ms to run NodePressure ...
	I0919 16:50:57.020068   21496 start.go:228] waiting for startup goroutines ...
	I0919 16:50:57.020076   21496 start.go:233] waiting for cluster config update ...
	I0919 16:50:57.020097   21496 start.go:242] writing updated cluster config ...
	I0919 16:50:57.020357   21496 ssh_runner.go:195] Run: rm -f paused
	I0919 16:50:57.064735   21496 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I0919 16:50:57.066585   21496 out.go:177] 
	W0919 16:50:57.068117   21496 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0919 16:50:57.069570   21496 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0919 16:50:57.070954   21496 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-845293" cluster and "default" namespace by default
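Given the version-skew warning above (client kubectl 1.28.2 against a 1.18.20 cluster), the suggested way to run a matching kubectl is minikube's bundled one, e.g.:

    # Uses a kubectl matching the cluster's v1.18.20 instead of the host's v1.28.2.
    out/minikube-linux-amd64 -p ingress-addon-legacy-845293 kubectl -- get pods -A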
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 16:49:29 UTC, ends at Tue 2023-09-19 16:54:02 UTC. --
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.754133007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=80381ec7-357c-4921-97a1-d1613e431ee5 name=/runtime.v1.RuntimeService/Version
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.756170773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e655c8d6-fd10-4ad2-bcd1-39b5c2d29dca name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.756705404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695142442756683179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202350,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=e655c8d6-fd10-4ad2-bcd1-39b5c2d29dca name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.757185834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=130fb4df-97a8-4eb3-aead-f5f2fe502437 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.757263337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=130fb4df-97a8-4eb3-aead-f5f2fe502437 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.757533605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7982d7abf9172e490ad053002eaa8d13ff6840373216d33770f1522273c61d23,PodSandboxId:6c71066717626cdede56acc8d51ed4266eacff967e61b7990d8f73cefff071b9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1695142432918273342,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kp7zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: feb3e65b-1b48-49f8-8d99-b9a7ec9df842,},Annotations:map[string]string{io.kubernetes.container.hash: df44b1ff,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b257b0ffcf6c738bbc1afdf09f4ba44f00ca83a83e3ce66d14e60c80b630f38,PodSandboxId:a02c93e290f678f7df92013eda24875d45c9244c40fb356c8fdb91e2a079687e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1695142292352928989,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9868ceb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a59c8e97896791c81d062c578d6b9cb3cfe4931f836a21a6a5a9a39839cc1b0,PodSandboxId:4705cf7b5056310e9f58f7afd05128590917bfd2c6f801b60d077571488de1bb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1695142272893803194,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-fj855,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2,},Annotations:map[string]string{io.kubernetes.container.hash: a5fab41a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:61130b25b1d79af340bda3ea9c1e473c6ae525056dcb6ae2cb31dfdc4c7903fe,PodSandboxId:3b937b5521f9c7680c90dadce46f3f7d36597db307e57c23d04aadd524d26a13,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1695142263788458176,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qqb9q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 74bd4d56-564b-4c1c-96f2-a9c1ed2a6253,},Annotations:map[string]string{io.kubernetes.container.hash: abd3666e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255a779968e7ebf59c9377e4347610571bf4251d5df70e59c4bdf881f97b1074,PodSandboxId:1134c16ad7b489969830048b43b9a76f7be03af75454d66e07a09cc8aa7da352,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1695142262615821989,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9kpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6c50039-1f9c-4971-a34b-56e1db7e0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 3a0fe64e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2d00ad6018379db86b11cb997e45728edbe730b6000909f983601c878bcc02,PodSandboxId:a3cbe1b2db486a90c40e692efb1378c989116672c4aa4318634272c198c8be8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695142220981201201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ce6002-3669-44ba-815b-882fe4a8fb80,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9c7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eff8a6f87c8c6bb60a2bb14c66de91d9e80ed95a5a756eeb78e542121f426fe,PodSandboxId:41699ce8a68a131bd4b9776bd63146cc42643c954d5893f55ede593a5fd537ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1695142220286446074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbv66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e0ef9af-57fc-4e32-a5ba-bfa875a85f4c,},Annotations:map[string]string{io.kubernetes.container.hash: cbe4bc45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726871d5c0858b65dcc761a2f269d04f09d3989f3268c9aa9fc6d8a70903d163,PodSandboxId:53652bb791460404627fd38defb7b3909014f1c75a4c0a9a1a999f4f710af474,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1695142219659164832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-bsghp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2672543d-94d8-4929-a675-5bef7e7a88cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3643109,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9ac163b2927f69e2a2abb86b75042e4caa052dec2f4df01a1029c845123841,PodSa
ndboxId:047fa0b569a592fc1e32c751dc50b6dc61e022778def2907aa60e0561ce96a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1695142197035812364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f140f9ee86e6c48678b8a7ba91a2d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 64fa79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d0c8cc9d972257ab4bba65e79a67f48d1e20d9a36dcedba70012c7f04a36a00,PodSandboxId:8624016f639aaf4c8e4951ab0147800a842d0064
e05beacb2c1f53bb50501c52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1695142195714799705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef805837e1ed2232a26f06444da1f9c236af68fabbbbeb83cf0d9339fc49de32,PodSandboxId:bd801f7050db4cbaeab1ba0fba097ce30423e589b63644
e75c7e4ba906ff2f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1695142195620426249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff1112eff53f9366557dc2a346e85947d6dcf9aed87bbc3b382e6de27026aff,PodSandboxId:b2c1bf6a673dfbc2
02d16fbffa8b56fd5bfbf4b113c868fcfe3be4f85c82d3ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1695142195412169886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53abda991029f9e054ae37d0cc603b56,},Annotations:map[string]string{io.kubernetes.container.hash: 743d4145,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=130fb4df-97a8-4eb3-aead-f5f2fe502437 name=/runtime.v1.RuntimeService
/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.795247481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e5fba388-ac04-498e-8f6c-c0e29900aef3 name=/runtime.v1.RuntimeService/Version
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.795329649Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e5fba388-ac04-498e-8f6c-c0e29900aef3 name=/runtime.v1.RuntimeService/Version
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.796816424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=57970f64-65e5-4e49-9d8d-1a2975e2b845 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.797342013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695142442797325227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202350,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=57970f64-65e5-4e49-9d8d-1a2975e2b845 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.798227191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fa04b564-1a8a-4a0c-a624-c69a238274eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.798292590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fa04b564-1a8a-4a0c-a624-c69a238274eb name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.798741017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7982d7abf9172e490ad053002eaa8d13ff6840373216d33770f1522273c61d23,PodSandboxId:6c71066717626cdede56acc8d51ed4266eacff967e61b7990d8f73cefff071b9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1695142432918273342,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kp7zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: feb3e65b-1b48-49f8-8d99-b9a7ec9df842,},Annotations:map[string]string{io.kubernetes.container.hash: df44b1ff,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b257b0ffcf6c738bbc1afdf09f4ba44f00ca83a83e3ce66d14e60c80b630f38,PodSandboxId:a02c93e290f678f7df92013eda24875d45c9244c40fb356c8fdb91e2a079687e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1695142292352928989,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9868ceb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a59c8e97896791c81d062c578d6b9cb3cfe4931f836a21a6a5a9a39839cc1b0,PodSandboxId:4705cf7b5056310e9f58f7afd05128590917bfd2c6f801b60d077571488de1bb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1695142272893803194,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-fj855,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2,},Annotations:map[string]string{io.kubernetes.container.hash: a5fab41a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:61130b25b1d79af340bda3ea9c1e473c6ae525056dcb6ae2cb31dfdc4c7903fe,PodSandboxId:3b937b5521f9c7680c90dadce46f3f7d36597db307e57c23d04aadd524d26a13,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1695142263788458176,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qqb9q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 74bd4d56-564b-4c1c-96f2-a9c1ed2a6253,},Annotations:map[string]string{io.kubernetes.container.hash: abd3666e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255a779968e7ebf59c9377e4347610571bf4251d5df70e59c4bdf881f97b1074,PodSandboxId:1134c16ad7b489969830048b43b9a76f7be03af75454d66e07a09cc8aa7da352,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1695142262615821989,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9kpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6c50039-1f9c-4971-a34b-56e1db7e0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 3a0fe64e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2d00ad6018379db86b11cb997e45728edbe730b6000909f983601c878bcc02,PodSandboxId:a3cbe1b2db486a90c40e692efb1378c989116672c4aa4318634272c198c8be8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695142220981201201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ce6002-3669-44ba-815b-882fe4a8fb80,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9c7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eff8a6f87c8c6bb60a2bb14c66de91d9e80ed95a5a756eeb78e542121f426fe,PodSandboxId:41699ce8a68a131bd4b9776bd63146cc42643c954d5893f55ede593a5fd537ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1695142220286446074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbv66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e0ef9af-57fc-4e32-a5ba-bfa875a85f4c,},Annotations:map[string]string{io.kubernetes.container.hash: cbe4bc45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726871d5c0858b65dcc761a2f269d04f09d3989f3268c9aa9fc6d8a70903d163,PodSandboxId:53652bb791460404627fd38defb7b3909014f1c75a4c0a9a1a999f4f710af474,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1695142219659164832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-bsghp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2672543d-94d8-4929-a675-5bef7e7a88cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3643109,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9ac163b2927f69e2a2abb86b75042e4caa052dec2f4df01a1029c845123841,PodSa
ndboxId:047fa0b569a592fc1e32c751dc50b6dc61e022778def2907aa60e0561ce96a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1695142197035812364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f140f9ee86e6c48678b8a7ba91a2d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 64fa79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d0c8cc9d972257ab4bba65e79a67f48d1e20d9a36dcedba70012c7f04a36a00,PodSandboxId:8624016f639aaf4c8e4951ab0147800a842d0064
e05beacb2c1f53bb50501c52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1695142195714799705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef805837e1ed2232a26f06444da1f9c236af68fabbbbeb83cf0d9339fc49de32,PodSandboxId:bd801f7050db4cbaeab1ba0fba097ce30423e589b63644
e75c7e4ba906ff2f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1695142195620426249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff1112eff53f9366557dc2a346e85947d6dcf9aed87bbc3b382e6de27026aff,PodSandboxId:b2c1bf6a673dfbc2
02d16fbffa8b56fd5bfbf4b113c868fcfe3be4f85c82d3ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1695142195412169886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53abda991029f9e054ae37d0cc603b56,},Annotations:map[string]string{io.kubernetes.container.hash: 743d4145,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fa04b564-1a8a-4a0c-a624-c69a238274eb name=/runtime.v1.RuntimeService
/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.837178585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c5efc18a-07e0-4b9c-b79c-cc741621f57e name=/runtime.v1.RuntimeService/Version
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.837260987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c5efc18a-07e0-4b9c-b79c-cc741621f57e name=/runtime.v1.RuntimeService/Version
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.838866811Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6dc91987-94a6-4134-9886-5126e01bce42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.839349423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695142442839336116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202350,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=6dc91987-94a6-4134-9886-5126e01bce42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.840071442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bed76f4b-1bde-45c0-b4c4-638ae05ddb11 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.840116765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bed76f4b-1bde-45c0-b4c4-638ae05ddb11 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.842985852Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=4c10bb1d-4dd8-4b1c-a52f-b5a0e0e154a0 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.843396948Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6c71066717626cdede56acc8d51ed4266eacff967e61b7990d8f73cefff071b9,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-kp7zr,Uid:feb3e65b-1b48-49f8-8d99-b9a7ec9df842,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695142429288861107,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kp7zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: feb3e65b-1b48-49f8-8d99-b9a7ec9df842,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T16:53:48.936869268Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a02c93e290f678f7df92013eda24875d45c9244c40fb356c8fdb91e2a079687e,Metadata:&PodSandboxMetadata{Name:nginx,Uid:7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd,Namespace:defau
lt,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695142287472388883,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T16:51:27.125962627Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf82fceeb9aef7bbc7ab5c06f3bb606ddd20bba69615e91ea1978ecd2dec58c1,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:34462b69-ab9b-4430-8318-dcb2cb17f7a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1695142274407203327,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34462b69-ab9b-4430-8318-dcb2cb17f7a5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configura
tion: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2023-09-19T16:51:14.051336523Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4705cf7b5056310e9f58f7afd05128590917bfd2c6f801b60d077571488de1bb,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-fj855,Uid:9d1f120b-5c6a-4613-8767
-4aaa6a6ee2a2,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1695142265696343727,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-fj855,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T16:50:57.857274631Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3b937b5521f9c7680c90dadce46f3f7d36597db307e57c23d04aadd524d26a13,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-qqb9q,Uid:74bd4d56-564b-4c1c-96f2-a9c1ed2a6253,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1695142258303720356,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/ins
tance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 939e3559-a6e9-460e-bb4c-400d945efa02,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-qqb9q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 74bd4d56-564b-4c1c-96f2-a9c1ed2a6253,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T16:50:57.948742983Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1134c16ad7b489969830048b43b9a76f7be03af75454d66e07a09cc8aa7da352,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-s9kpc,Uid:e6c50039-1f9c-4971-a34b-56e1db7e0ca6,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1695142258256893592,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 23f50af4-a2a2-4bed-8f19-6c1d355f17e4,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: ingress-nginx-admission-create-s9kpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6c50039-1f9c-4971-a34b-56e1db7e0ca6,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T16:50:57.908067913Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3cbe1b2db486a90c40e692efb1378c989116672c4aa4318634272c198c8be8f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:98ce6002-3669-44ba-815b-882fe4a8fb80,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695142220544748035,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ce6002-3669-44ba-815b-882fe4a8fb80,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annota
tions\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-19T16:50:20.203296460Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53652bb791460404627fd38defb7b3909014f1c75a4c0a9a1a999f4f710af474,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-bsghp,Uid:2672543d-94d8-4929-a675-5bef7e7a88cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695142219075228905,Labels:map[string]string{io.kubernetes.container.
name: POD,io.kubernetes.pod.name: coredns-66bff467f8-bsghp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2672543d-94d8-4929-a675-5bef7e7a88cc,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T16:50:18.608835090Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41699ce8a68a131bd4b9776bd63146cc42643c954d5893f55ede593a5fd537ae,Metadata:&PodSandboxMetadata{Name:kube-proxy-nbv66,Uid:3e0ef9af-57fc-4e32-a5ba-bfa875a85f4c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695142218676968211,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nbv66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e0ef9af-57fc-4e32-a5ba-bfa875a85f4c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T16:50:18.340967871Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:b2c1bf6a673dfbc202d16fbffa8b56fd5bfbf4b113c868fcfe3be4f85c82d3ab,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-legacy-845293,Uid:53abda991029f9e054ae37d0cc603b56,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695142194969509002,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53abda991029f9e054ae37d0cc603b56,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.244:8443,kubernetes.io/config.hash: 53abda991029f9e054ae37d0cc603b56,kubernetes.io/config.seen: 2023-09-19T16:49:54.032551049Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:047fa0b569a592fc1e32c751dc50b6dc61e022778def2907aa60e0561ce96a5c,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-845293,Uid:5f140f9ee86e6c48678b8a7ba9
1a2d0a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695142194949193023,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f140f9ee86e6c48678b8a7ba91a2d0a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.244:2379,kubernetes.io/config.hash: 5f140f9ee86e6c48678b8a7ba91a2d0a,kubernetes.io/config.seen: 2023-09-19T16:49:54.036922918Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8624016f639aaf4c8e4951ab0147800a842d0064e05beacb2c1f53bb50501c52,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-845293,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695142194912537862,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-
ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernetes.io/config.seen: 2023-09-19T16:49:54.035313395Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bd801f7050db4cbaeab1ba0fba097ce30423e589b63644e75c7e4ba906ff2f3b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-845293,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695142194878458578,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e27827b1f8d737725,kubernete
s.io/config.seen: 2023-09-19T16:49:54.034186671Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=4c10bb1d-4dd8-4b1c-a52f-b5a0e0e154a0 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.845678952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7982d7abf9172e490ad053002eaa8d13ff6840373216d33770f1522273c61d23,PodSandboxId:6c71066717626cdede56acc8d51ed4266eacff967e61b7990d8f73cefff071b9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1695142432918273342,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kp7zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: feb3e65b-1b48-49f8-8d99-b9a7ec9df842,},Annotations:map[string]string{io.kubernetes.container.hash: df44b1ff,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b257b0ffcf6c738bbc1afdf09f4ba44f00ca83a83e3ce66d14e60c80b630f38,PodSandboxId:a02c93e290f678f7df92013eda24875d45c9244c40fb356c8fdb91e2a079687e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1695142292352928989,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9868ceb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a59c8e97896791c81d062c578d6b9cb3cfe4931f836a21a6a5a9a39839cc1b0,PodSandboxId:4705cf7b5056310e9f58f7afd05128590917bfd2c6f801b60d077571488de1bb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1695142272893803194,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-fj855,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2,},Annotations:map[string]string{io.kubernetes.container.hash: a5fab41a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:61130b25b1d79af340bda3ea9c1e473c6ae525056dcb6ae2cb31dfdc4c7903fe,PodSandboxId:3b937b5521f9c7680c90dadce46f3f7d36597db307e57c23d04aadd524d26a13,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1695142263788458176,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qqb9q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 74bd4d56-564b-4c1c-96f2-a9c1ed2a6253,},Annotations:map[string]string{io.kubernetes.container.hash: abd3666e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255a779968e7ebf59c9377e4347610571bf4251d5df70e59c4bdf881f97b1074,PodSandboxId:1134c16ad7b489969830048b43b9a76f7be03af75454d66e07a09cc8aa7da352,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1695142262615821989,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9kpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6c50039-1f9c-4971-a34b-56e1db7e0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 3a0fe64e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2d00ad6018379db86b11cb997e45728edbe730b6000909f983601c878bcc02,PodSandboxId:a3cbe1b2db486a90c40e692efb1378c989116672c4aa4318634272c198c8be8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695142220981201201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ce6002-3669-44ba-815b-882fe4a8fb80,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9c7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eff8a6f87c8c6bb60a2bb14c66de91d9e80ed95a5a756eeb78e542121f426fe,PodSandboxId:41699ce8a68a131bd4b9776bd63146cc42643c954d5893f55ede593a5fd537ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1695142220286446074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbv66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e0ef9af-57fc-4e32-a5ba-bfa875a85f4c,},Annotations:map[string]string{io.kubernetes.container.hash: cbe4bc45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726871d5c0858b65dcc761a2f269d04f09d3989f3268c9aa9fc6d8a70903d163,PodSandboxId:53652bb791460404627fd38defb7b3909014f1c75a4c0a9a1a999f4f710af474,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1695142219659164832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-bsghp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2672543d-94d8-4929-a675-5bef7e7a88cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3643109,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9ac163b2927f69e2a2abb86b75042e4caa052dec2f4df01a1029c845123841,PodSa
ndboxId:047fa0b569a592fc1e32c751dc50b6dc61e022778def2907aa60e0561ce96a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1695142197035812364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f140f9ee86e6c48678b8a7ba91a2d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 64fa79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d0c8cc9d972257ab4bba65e79a67f48d1e20d9a36dcedba70012c7f04a36a00,PodSandboxId:8624016f639aaf4c8e4951ab0147800a842d0064
e05beacb2c1f53bb50501c52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1695142195714799705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef805837e1ed2232a26f06444da1f9c236af68fabbbbeb83cf0d9339fc49de32,PodSandboxId:bd801f7050db4cbaeab1ba0fba097ce30423e589b63644
e75c7e4ba906ff2f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1695142195620426249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff1112eff53f9366557dc2a346e85947d6dcf9aed87bbc3b382e6de27026aff,PodSandboxId:b2c1bf6a673dfbc2
02d16fbffa8b56fd5bfbf4b113c868fcfe3be4f85c82d3ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1695142195412169886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53abda991029f9e054ae37d0cc603b56,},Annotations:map[string]string{io.kubernetes.container.hash: 743d4145,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bed76f4b-1bde-45c0-b4c4-638ae05ddb11 name=/runtime.v1.RuntimeService
/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.846106636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5af48572-8c88-4e87-9a24-e5e3261d9ba2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.846191214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5af48572-8c88-4e87-9a24-e5e3261d9ba2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 19 16:54:02 ingress-addon-legacy-845293 crio[718]: time="2023-09-19 16:54:02.849256728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7982d7abf9172e490ad053002eaa8d13ff6840373216d33770f1522273c61d23,PodSandboxId:6c71066717626cdede56acc8d51ed4266eacff967e61b7990d8f73cefff071b9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1695142432918273342,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kp7zr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: feb3e65b-1b48-49f8-8d99-b9a7ec9df842,},Annotations:map[string]string{io.kubernetes.container.hash: df44b1ff,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b257b0ffcf6c738bbc1afdf09f4ba44f00ca83a83e3ce66d14e60c80b630f38,PodSandboxId:a02c93e290f678f7df92013eda24875d45c9244c40fb356c8fdb91e2a079687e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1695142292352928989,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9868ceb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a59c8e97896791c81d062c578d6b9cb3cfe4931f836a21a6a5a9a39839cc1b0,PodSandboxId:4705cf7b5056310e9f58f7afd05128590917bfd2c6f801b60d077571488de1bb,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1695142272893803194,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-fj855,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2,},Annotations:map[string]string{io.kubernetes.container.hash: a5fab41a,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:61130b25b1d79af340bda3ea9c1e473c6ae525056dcb6ae2cb31dfdc4c7903fe,PodSandboxId:3b937b5521f9c7680c90dadce46f3f7d36597db307e57c23d04aadd524d26a13,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1695142263788458176,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qqb9q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 74bd4d56-564b-4c1c-96f2-a9c1ed2a6253,},Annotations:map[string]string{io.kubernetes.container.hash: abd3666e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255a779968e7ebf59c9377e4347610571bf4251d5df70e59c4bdf881f97b1074,PodSandboxId:1134c16ad7b489969830048b43b9a76f7be03af75454d66e07a09cc8aa7da352,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1695142262615821989,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s9kpc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e6c50039-1f9c-4971-a34b-56e1db7e0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 3a0fe64e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f2d00ad6018379db86b11cb997e45728edbe730b6000909f983601c878bcc02,PodSandboxId:a3cbe1b2db486a90c40e692efb1378c989116672c4aa4318634272c198c8be8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695142220981201201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98ce6002-3669-44ba-815b-882fe4a8fb80,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9c7d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eff8a6f87c8c6bb60a2bb14c66de91d9e80ed95a5a756eeb78e542121f426fe,PodSandboxId:41699ce8a68a131bd4b9776bd63146cc42643c954d5893f55ede593a5fd537ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1695142220286446074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nbv66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e0ef9af-57fc-4e32-a5ba-bfa875a85f4c,},Annotations:map[string]string{io.kubernetes.container.hash: cbe4bc45,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726871d5c0858b65dcc761a2f269d04f09d3989f3268c9aa9fc6d8a70903d163,PodSandboxId:53652bb791460404627fd38defb7b3909014f1c75a4c0a9a1a999f4f710af474,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1695142219659164832,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-bsghp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2672543d-94d8-4929-a675-5bef7e7a88cc,},Annotations:map[string]string{io.kubernetes.container.hash: 3643109,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b9ac163b2927f69e2a2abb86b75042e4caa052dec2f4df01a1029c845123841,PodSa
ndboxId:047fa0b569a592fc1e32c751dc50b6dc61e022778def2907aa60e0561ce96a5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1695142197035812364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f140f9ee86e6c48678b8a7ba91a2d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 64fa79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d0c8cc9d972257ab4bba65e79a67f48d1e20d9a36dcedba70012c7f04a36a00,PodSandboxId:8624016f639aaf4c8e4951ab0147800a842d0064
e05beacb2c1f53bb50501c52,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1695142195714799705,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef805837e1ed2232a26f06444da1f9c236af68fabbbbeb83cf0d9339fc49de32,PodSandboxId:bd801f7050db4cbaeab1ba0fba097ce30423e589b63644
e75c7e4ba906ff2f3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1695142195620426249,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ff1112eff53f9366557dc2a346e85947d6dcf9aed87bbc3b382e6de27026aff,PodSandboxId:b2c1bf6a673dfbc2
02d16fbffa8b56fd5bfbf4b113c868fcfe3be4f85c82d3ab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1695142195412169886,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-845293,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53abda991029f9e054ae37d0cc603b56,},Annotations:map[string]string{io.kubernetes.container.hash: 743d4145,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5af48572-8c88-4e87-9a24-e5e3261d9ba2 name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7982d7abf9172       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb            10 seconds ago      Running             hello-world-app           0                   6c71066717626       hello-world-app-5f5d8b66bb-kp7zr
	0b257b0ffcf6c       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   a02c93e290f67       nginx
	9a59c8e978967       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   4705cf7b50563       ingress-nginx-controller-7fcf777cb7-fj855
	61130b25b1d79       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   3b937b5521f9c       ingress-nginx-admission-patch-qqb9q
	255a779968e7e       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   1134c16ad7b48       ingress-nginx-admission-create-s9kpc
	4f2d00ad60183       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   a3cbe1b2db486       storage-provisioner
	5eff8a6f87c8c       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   41699ce8a68a1       kube-proxy-nbv66
	726871d5c0858       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   53652bb791460       coredns-66bff467f8-bsghp
	1b9ac163b2927       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   047fa0b569a59       etcd-ingress-addon-legacy-845293
	3d0c8cc9d9722       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   8624016f639aa       kube-scheduler-ingress-addon-legacy-845293
	ef805837e1ed2       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   bd801f7050db4       kube-controller-manager-ingress-addon-legacy-845293
	0ff1112eff53f       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   b2c1bf6a673df       kube-apiserver-ingress-addon-legacy-845293
	
	* 
	* ==> coredns [726871d5c0858b65dcc761a2f269d04f09d3989f3268c9aa9fc6d8a70903d163] <==
	* [INFO] 10.244.0.6:33792 - 32443 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069992s
	[INFO] 10.244.0.6:33792 - 19208 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087642s
	[INFO] 10.244.0.6:33792 - 18770 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058501s
	[INFO] 10.244.0.6:33792 - 11879 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000462996s
	[INFO] 10.244.0.6:37178 - 19137 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.007078132s
	[INFO] 10.244.0.6:37178 - 37517 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000642107s
	[INFO] 10.244.0.6:37178 - 41657 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000127387s
	[INFO] 10.244.0.6:37178 - 46839 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00010957s
	[INFO] 10.244.0.6:37178 - 24636 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000654753s
	[INFO] 10.244.0.6:37178 - 64839 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000734768s
	[INFO] 10.244.0.6:37178 - 26524 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00024027s
	[INFO] 10.244.0.6:56719 - 36892 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000089163s
	[INFO] 10.244.0.6:59042 - 30061 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120638s
	[INFO] 10.244.0.6:59042 - 54223 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000045313s
	[INFO] 10.244.0.6:56719 - 58703 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000027998s
	[INFO] 10.244.0.6:59042 - 26514 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000025415s
	[INFO] 10.244.0.6:56719 - 24336 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000021389s
	[INFO] 10.244.0.6:59042 - 11578 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032773s
	[INFO] 10.244.0.6:56719 - 47400 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000084471s
	[INFO] 10.244.0.6:59042 - 55069 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070755s
	[INFO] 10.244.0.6:56719 - 44971 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056834s
	[INFO] 10.244.0.6:59042 - 35307 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057823s
	[INFO] 10.244.0.6:56719 - 28836 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000243038s
	[INFO] 10.244.0.6:59042 - 46418 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000120544s
	[INFO] 10.244.0.6:56719 - 37 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053461s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-845293
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-845293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=ingress-addon-legacy-845293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T16_50_04_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:50:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-845293
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 16:53:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 16:51:34 +0000   Tue, 19 Sep 2023 16:49:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 16:51:34 +0000   Tue, 19 Sep 2023 16:49:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 16:51:34 +0000   Tue, 19 Sep 2023 16:49:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 16:51:34 +0000   Tue, 19 Sep 2023 16:50:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ingress-addon-legacy-845293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dfc7dd8f6d14e9eb0a775122aacaf04
	  System UUID:                2dfc7dd8-f6d1-4e9e-b0a7-75122aacaf04
	  Boot ID:                    4898aef0-e49a-4d62-9281-1dcda6a9c20c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-kp7zr                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 coredns-66bff467f8-bsghp                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m45s
	  kube-system                 etcd-ingress-addon-legacy-845293                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-apiserver-ingress-addon-legacy-845293             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-845293    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-nbv66                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-scheduler-ingress-addon-legacy-845293             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-845293 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-845293 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-845293 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m59s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m59s                kubelet     Node ingress-addon-legacy-845293 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s                kubelet     Node ingress-addon-legacy-845293 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s                kubelet     Node ingress-addon-legacy-845293 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m59s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m49s                kubelet     Node ingress-addon-legacy-845293 status is now: NodeReady
	  Normal  Starting                 3m43s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep19 16:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.099819] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.376778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.379971] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151207] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.001148] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.795587] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.111617] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.132829] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.099550] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.210162] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +8.432147] systemd-fstab-generator[1027]: Ignoring "noauto" for root device
	[  +2.791830] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep19 16:50] systemd-fstab-generator[1422]: Ignoring "noauto" for root device
	[ +15.386327] kauditd_printk_skb: 6 callbacks suppressed
	[ +36.275110] kauditd_printk_skb: 20 callbacks suppressed
	[Sep19 16:51] kauditd_printk_skb: 6 callbacks suppressed
	[ +23.899682] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.651275] kauditd_printk_skb: 3 callbacks suppressed
	[Sep19 16:53] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [1b9ac163b2927f69e2a2abb86b75042e4caa052dec2f4df01a1029c845123841] <==
	* raft2023/09/19 16:49:57 INFO: 38b93d7e943acb5d became follower at term 1
	raft2023/09/19 16:49:57 INFO: 38b93d7e943acb5d switched to configuration voters=(4087365750677490525)
	2023-09-19 16:49:57.196221 W | auth: simple token is not cryptographically signed
	2023-09-19 16:49:57.200947 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-19 16:49:57.206188 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-19 16:49:57.206577 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-19 16:49:57.207029 I | embed: listening for peers on 192.168.39.244:2380
	2023-09-19 16:49:57.207769 I | etcdserver: 38b93d7e943acb5d as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/19 16:49:57 INFO: 38b93d7e943acb5d switched to configuration voters=(4087365750677490525)
	2023-09-19 16:49:57.208260 I | etcdserver/membership: added member 38b93d7e943acb5d [https://192.168.39.244:2380] to cluster ae521d247b31ac74
	raft2023/09/19 16:49:57 INFO: 38b93d7e943acb5d is starting a new election at term 1
	raft2023/09/19 16:49:57 INFO: 38b93d7e943acb5d became candidate at term 2
	raft2023/09/19 16:49:57 INFO: 38b93d7e943acb5d received MsgVoteResp from 38b93d7e943acb5d at term 2
	raft2023/09/19 16:49:57 INFO: 38b93d7e943acb5d became leader at term 2
	raft2023/09/19 16:49:57 INFO: raft.node: 38b93d7e943acb5d elected leader 38b93d7e943acb5d at term 2
	2023-09-19 16:49:57.387501 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-19 16:49:57.387852 I | etcdserver: published {Name:ingress-addon-legacy-845293 ClientURLs:[https://192.168.39.244:2379]} to cluster ae521d247b31ac74
	2023-09-19 16:49:57.387952 I | embed: ready to serve client requests
	2023-09-19 16:49:57.388304 I | embed: ready to serve client requests
	2023-09-19 16:49:57.394415 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-19 16:49:57.396231 I | embed: serving client requests on 192.168.39.244:2379
	2023-09-19 16:49:57.397051 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-19 16:49:57.397105 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-19 16:50:19.022272 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:209" took too long (141.738999ms) to execute
	2023-09-19 16:51:37.476982 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2214" took too long (302.793226ms) to execute
	
	* 
	* ==> kernel <==
	*  16:54:03 up 4 min,  0 users,  load average: 0.30, 0.38, 0.18
	Linux ingress-addon-legacy-845293 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0ff1112eff53f9366557dc2a346e85947d6dcf9aed87bbc3b382e6de27026aff] <==
	* I0919 16:50:00.520370       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0919 16:50:00.539731       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.244, ResourceVersion: 0, AdditionalErrorMsg: 
	I0919 16:50:00.570959       1 cache.go:39] Caches are synced for autoregister controller
	I0919 16:50:00.571239       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 16:50:00.580231       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0919 16:50:00.580321       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 16:50:00.620294       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0919 16:50:01.468718       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0919 16:50:01.468782       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0919 16:50:01.480354       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0919 16:50:01.485677       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0919 16:50:01.485750       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0919 16:50:01.995069       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 16:50:02.056326       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0919 16:50:02.138518       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.244]
	I0919 16:50:02.139410       1 controller.go:609] quota admission added evaluator for: endpoints
	I0919 16:50:02.144338       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 16:50:02.839978       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0919 16:50:03.934680       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0919 16:50:04.039880       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0919 16:50:04.451334       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 16:50:18.282996       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0919 16:50:18.550239       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0919 16:50:57.852261       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0919 16:51:26.952044       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [ef805837e1ed2232a26f06444da1f9c236af68fabbbbeb83cf0d9339fc49de32] <==
	* I0919 16:50:18.540746       1 shared_informer.go:230] Caches are synced for disruption 
	I0919 16:50:18.540768       1 disruption.go:339] Sending events to api server.
	I0919 16:50:18.565739       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"a3e5324e-3bed-429b-9a12-6df569510b50", APIVersion:"apps/v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0919 16:50:18.572366       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6db37cec-0580-4e76-9352-bbd90e63def3", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-tfkz9
	I0919 16:50:18.600070       1 request.go:621] Throttling request took 1.054959702s, request: GET:https://control-plane.minikube.internal:8443/apis/autoscaling/v1?timeout=32s
	I0919 16:50:18.602425       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6db37cec-0580-4e76-9352-bbd90e63def3", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-bsghp
	I0919 16:50:18.745013       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0919 16:50:18.745021       1 shared_informer.go:230] Caches are synced for endpoint 
	I0919 16:50:18.766870       1 shared_informer.go:230] Caches are synced for job 
	I0919 16:50:18.848053       1 shared_informer.go:230] Caches are synced for resource quota 
	I0919 16:50:18.865013       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0919 16:50:18.865060       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0919 16:50:18.894808       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0919 16:50:19.133815       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"a3e5324e-3bed-429b-9a12-6df569510b50", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0919 16:50:19.193424       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6db37cec-0580-4e76-9352-bbd90e63def3", APIVersion:"apps/v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-tfkz9
	I0919 16:50:19.210276       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0919 16:50:19.210333       1 shared_informer.go:230] Caches are synced for resource quota 
	I0919 16:50:57.822733       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ce450b01-db79-433d-ab7e-9fe68892d2fe", APIVersion:"apps/v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0919 16:50:57.842200       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3387df52-12ed-4543-a62e-b6c6c6ade27e", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-fj855
	I0919 16:50:57.891513       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"23f50af4-a2a2-4bed-8f19-6c1d355f17e4", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-s9kpc
	I0919 16:50:57.930789       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"939e3559-a6e9-460e-bb4c-400d945efa02", APIVersion:"batch/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-qqb9q
	I0919 16:51:02.824013       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"23f50af4-a2a2-4bed-8f19-6c1d355f17e4", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0919 16:51:04.827456       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"939e3559-a6e9-460e-bb4c-400d945efa02", APIVersion:"batch/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0919 16:53:48.898354       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"57eef88b-e1ab-4f66-9fff-d5b52fde831e", APIVersion:"apps/v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0919 16:53:48.922571       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"a38a31e7-90e2-452a-b8cf-737060a363c8", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-kp7zr
	
	* 
	* ==> kube-proxy [5eff8a6f87c8c6bb60a2bb14c66de91d9e80ed95a5a756eeb78e542121f426fe] <==
	* W0919 16:50:20.469496       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0919 16:50:20.477445       1 node.go:136] Successfully retrieved node IP: 192.168.39.244
	I0919 16:50:20.477493       1 server_others.go:186] Using iptables Proxier.
	I0919 16:50:20.477999       1 server.go:583] Version: v1.18.20
	I0919 16:50:20.479414       1 config.go:133] Starting endpoints config controller
	I0919 16:50:20.479571       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0919 16:50:20.479759       1 config.go:315] Starting service config controller
	I0919 16:50:20.479777       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0919 16:50:20.580139       1 shared_informer.go:230] Caches are synced for service config 
	I0919 16:50:20.580567       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [3d0c8cc9d972257ab4bba65e79a67f48d1e20d9a36dcedba70012c7f04a36a00] <==
	* I0919 16:50:00.583199       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0919 16:50:00.585307       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0919 16:50:00.585494       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 16:50:00.585637       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 16:50:00.585674       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0919 16:50:00.596893       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 16:50:00.598286       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 16:50:00.598558       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 16:50:00.598818       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 16:50:00.598985       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 16:50:00.599160       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 16:50:00.599247       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 16:50:00.599385       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 16:50:00.599464       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 16:50:00.599532       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 16:50:00.599662       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 16:50:00.599829       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 16:50:01.436181       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 16:50:01.438686       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 16:50:01.591091       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 16:50:01.613710       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 16:50:01.639026       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 16:50:01.774748       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 16:50:01.826432       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0919 16:50:04.185909       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 16:49:29 UTC, ends at Tue 2023-09-19 16:54:03 UTC. --
	Sep 19 16:51:04 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:51:04.991993    1432 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74bd4d56-564b-4c1c-96f2-a9c1ed2a6253-ingress-nginx-admission-token-phlsw" (OuterVolumeSpecName: "ingress-nginx-admission-token-phlsw") pod "74bd4d56-564b-4c1c-96f2-a9c1ed2a6253" (UID: "74bd4d56-564b-4c1c-96f2-a9c1ed2a6253"). InnerVolumeSpecName "ingress-nginx-admission-token-phlsw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 19 16:51:05 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:51:05.078565    1432 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-phlsw" (UniqueName: "kubernetes.io/secret/74bd4d56-564b-4c1c-96f2-a9c1ed2a6253-ingress-nginx-admission-token-phlsw") on node "ingress-addon-legacy-845293" DevicePath ""
	Sep 19 16:51:05 ingress-addon-legacy-845293 kubelet[1432]: W0919 16:51:05.820699    1432 pod_container_deletor.go:77] Container "3b937b5521f9c7680c90dadce46f3f7d36597db307e57c23d04aadd524d26a13" not found in pod's containers
	Sep 19 16:51:14 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:51:14.051845    1432 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 19 16:51:14 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:51:14.207966    1432 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-2vmjx" (UniqueName: "kubernetes.io/secret/34462b69-ab9b-4430-8318-dcb2cb17f7a5-minikube-ingress-dns-token-2vmjx") pod "kube-ingress-dns-minikube" (UID: "34462b69-ab9b-4430-8318-dcb2cb17f7a5")
	Sep 19 16:51:27 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:51:27.126078    1432 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 19 16:51:27 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:51:27.249717    1432 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-5ln6l" (UniqueName: "kubernetes.io/secret/7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd-default-token-5ln6l") pod "nginx" (UID: "7b9d3ace-0c6b-41e4-ab2a-23cf2989f9bd")
	Sep 19 16:53:48 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:48.937557    1432 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 19 16:53:49 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:49.103680    1432 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-5ln6l" (UniqueName: "kubernetes.io/secret/feb3e65b-1b48-49f8-8d99-b9a7ec9df842-default-token-5ln6l") pod "hello-world-app-5f5d8b66bb-kp7zr" (UID: "feb3e65b-1b48-49f8-8d99-b9a7ec9df842")
	Sep 19 16:53:50 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:50.800812    1432 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c40114cd1191e8465e2b47b36ea210da8e64d5ee476bac3af6a81802fe01250c
	Sep 19 16:53:50 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:50.849768    1432 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c40114cd1191e8465e2b47b36ea210da8e64d5ee476bac3af6a81802fe01250c
	Sep 19 16:53:50 ingress-addon-legacy-845293 kubelet[1432]: E0919 16:53:50.850182    1432 remote_runtime.go:295] ContainerStatus "c40114cd1191e8465e2b47b36ea210da8e64d5ee476bac3af6a81802fe01250c" from runtime service failed: rpc error: code = NotFound desc = could not find container "c40114cd1191e8465e2b47b36ea210da8e64d5ee476bac3af6a81802fe01250c": container with ID starting with c40114cd1191e8465e2b47b36ea210da8e64d5ee476bac3af6a81802fe01250c not found: ID does not exist
	Sep 19 16:53:50 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:50.909126    1432 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-2vmjx" (UniqueName: "kubernetes.io/secret/34462b69-ab9b-4430-8318-dcb2cb17f7a5-minikube-ingress-dns-token-2vmjx") pod "34462b69-ab9b-4430-8318-dcb2cb17f7a5" (UID: "34462b69-ab9b-4430-8318-dcb2cb17f7a5")
	Sep 19 16:53:50 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:50.911843    1432 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34462b69-ab9b-4430-8318-dcb2cb17f7a5-minikube-ingress-dns-token-2vmjx" (OuterVolumeSpecName: "minikube-ingress-dns-token-2vmjx") pod "34462b69-ab9b-4430-8318-dcb2cb17f7a5" (UID: "34462b69-ab9b-4430-8318-dcb2cb17f7a5"). InnerVolumeSpecName "minikube-ingress-dns-token-2vmjx". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 19 16:53:51 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:51.009572    1432 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-2vmjx" (UniqueName: "kubernetes.io/secret/34462b69-ab9b-4430-8318-dcb2cb17f7a5-minikube-ingress-dns-token-2vmjx") on node "ingress-addon-legacy-845293" DevicePath ""
	Sep 19 16:53:55 ingress-addon-legacy-845293 kubelet[1432]: E0919 16:53:55.381294    1432 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-fj855.17865b105061a0b8", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-fj855", UID:"9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2", APIVersion:"v1", ResourceVersion:"473", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-845293"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a9268d68402b8, ext:231535275488, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a9268d68402b8, ext:231535275488, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-fj855.17865b105061a0b8" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 19 16:53:55 ingress-addon-legacy-845293 kubelet[1432]: E0919 16:53:55.399046    1432 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-fj855.17865b105061a0b8", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-fj855", UID:"9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2", APIVersion:"v1", ResourceVersion:"473", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-845293"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a9268d68402b8, ext:231535275488, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a9268d751425d, ext:231548726660, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-fj855.17865b105061a0b8" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 19 16:53:57 ingress-addon-legacy-845293 kubelet[1432]: W0919 16:53:57.830523    1432 pod_container_deletor.go:77] Container "4705cf7b5056310e9f58f7afd05128590917bfd2c6f801b60d077571488de1bb" not found in pod's containers
	Sep 19 16:53:59 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:59.537178    1432 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2-webhook-cert") pod "9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2" (UID: "9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2")
	Sep 19 16:53:59 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:59.537328    1432 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-4gkx4" (UniqueName: "kubernetes.io/secret/9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2-ingress-nginx-token-4gkx4") pod "9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2" (UID: "9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2")
	Sep 19 16:53:59 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:59.542057    1432 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2" (UID: "9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 19 16:53:59 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:59.544838    1432 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2-ingress-nginx-token-4gkx4" (OuterVolumeSpecName: "ingress-nginx-token-4gkx4") pod "9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2" (UID: "9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2"). InnerVolumeSpecName "ingress-nginx-token-4gkx4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 19 16:53:59 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:59.637720    1432 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2-webhook-cert") on node "ingress-addon-legacy-845293" DevicePath ""
	Sep 19 16:53:59 ingress-addon-legacy-845293 kubelet[1432]: I0919 16:53:59.637782    1432 reconciler.go:319] Volume detached for volume "ingress-nginx-token-4gkx4" (UniqueName: "kubernetes.io/secret/9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2-ingress-nginx-token-4gkx4") on node "ingress-addon-legacy-845293" DevicePath ""
	Sep 19 16:54:00 ingress-addon-legacy-845293 kubelet[1432]: W0919 16:54:00.490300    1432 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/9d1f120b-5c6a-4613-8767-4aaa6a6ee2a2/volumes" does not exist
	
	* 
	* ==> storage-provisioner [4f2d00ad6018379db86b11cb997e45728edbe730b6000909f983601c878bcc02] <==
	* I0919 16:50:21.085229       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 16:50:21.095730       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 16:50:21.095865       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 16:50:21.103576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 16:50:21.105739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ebde39d-a8e5-4d73-b1ca-d1afc50c81a1", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-845293_2b821872-ef17-41cb-a522-1125f4727087 became leader
	I0919 16:50:21.106143       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-845293_2b821872-ef17-41cb-a522-1125f4727087!
	I0919 16:50:21.209414       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-845293_2b821872-ef17-41cb-a522-1125f4727087!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-845293 -n ingress-addon-legacy-845293
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-845293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (169.75s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-m9sw8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-m9sw8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-m9sw8 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (166.108955ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-m9sw8): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-xj8tc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-xj8tc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-xj8tc -- sh -c "ping -c 1 192.168.39.1": exit status 1 (180.546962ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-xj8tc): exit status 1
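The stderr above ("ping: permission denied (are you root?)") means busybox's ping could not open a raw ICMP socket inside the pod. On Linux, unprivileged ICMP needs either the CAP_NET_RAW capability or a net.ipv4.ping_group_range sysctl that covers the process's group, and container runtimes such as cri-o may drop NET_RAW from the default capability set. A minimal workaround sketch, using the same minikube kubectl wrapper the test invokes, is shown below; the pod name ping-check and the image tag are illustrative assumptions, not part of the test data.

# Sketch only: "ping-check" and busybox:1.28 are illustrative choices, not test fixtures.
out/minikube-linux-amd64 kubectl -p multinode-553715 -- apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ping-check
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        add: ["NET_RAW"]   # explicitly allow raw ICMP sockets for ping
EOF
# With NET_RAW granted, the ping that failed above would be expected to work:
out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec ping-check -- ping -c 1 192.168.39.1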
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-553715 -n multinode-553715
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-553715 logs -n 25: (1.380704306s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-708182 ssh -- ls                    | mount-start-2-708182 | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 16:58 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-708182 ssh --                       | mount-start-2-708182 | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 16:58 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-708182                           | mount-start-2-708182 | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 16:58 UTC |
	| start   | -p mount-start-2-708182                           | mount-start-2-708182 | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 16:58 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-708182 | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC |                     |
	|         | --profile mount-start-2-708182                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-708182 ssh -- ls                    | mount-start-2-708182 | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 16:58 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-708182 ssh --                       | mount-start-2-708182 | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 16:58 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-708182                           | mount-start-2-708182 | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 16:58 UTC |
	| delete  | -p mount-start-1-694120                           | mount-start-1-694120 | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 16:58 UTC |
	| start   | -p multinode-553715                               | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 17:00 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- apply -f                   | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- rollout                    | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- get pods -o                | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- get pods -o                | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | busybox-5bc68d56bd-m9sw8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | busybox-5bc68d56bd-xj8tc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | busybox-5bc68d56bd-m9sw8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | busybox-5bc68d56bd-xj8tc --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | busybox-5bc68d56bd-m9sw8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | busybox-5bc68d56bd-xj8tc -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- get pods -o                | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | busybox-5bc68d56bd-m9sw8                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC |                     |
	|         | busybox-5bc68d56bd-m9sw8 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC | 19 Sep 23 17:00 UTC |
	|         | busybox-5bc68d56bd-xj8tc                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-553715 -- exec                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:00 UTC |                     |
	|         | busybox-5bc68d56bd-xj8tc -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
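	
	For reference, the multinode run recorded in the table can be reproduced by hand with roughly the same flags the harness used (a sketch only; the profile name, memory size, and testdata path are copied verbatim from the rows above and may differ in other runs):
	
	  # start the same two-node crio cluster the test created
	  out/minikube-linux-amd64 start -p multinode-553715 --wait=true --memory=2200 \
	    --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
	  # apply the DNS smoke-test deployment exercised by the kubectl rows that follow
	  out/minikube-linux-amd64 kubectl -p multinode-553715 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml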
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 16:58:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 16:58:57.647681   25636 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:58:57.647773   25636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:58:57.647785   25636 out.go:309] Setting ErrFile to fd 2...
	I0919 16:58:57.647792   25636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:58:57.647984   25636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 16:58:57.648592   25636 out.go:303] Setting JSON to false
	I0919 16:58:57.649402   25636 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2488,"bootTime":1695140250,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:58:57.649461   25636 start.go:138] virtualization: kvm guest
	I0919 16:58:57.651461   25636 out.go:177] * [multinode-553715] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:58:57.652871   25636 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 16:58:57.654154   25636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:58:57.652914   25636 notify.go:220] Checking for updates...
	I0919 16:58:57.657064   25636 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:58:57.658571   25636 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:58:57.659821   25636 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 16:58:57.661138   25636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 16:58:57.662535   25636 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:58:57.695988   25636 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 16:58:57.697436   25636 start.go:298] selected driver: kvm2
	I0919 16:58:57.697448   25636 start.go:902] validating driver "kvm2" against <nil>
	I0919 16:58:57.697457   25636 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 16:58:57.698711   25636 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:58:57.698858   25636 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 16:58:57.713174   25636 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 16:58:57.713221   25636 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 16:58:57.713395   25636 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 16:58:57.713446   25636 cni.go:84] Creating CNI manager for ""
	I0919 16:58:57.713457   25636 cni.go:136] 0 nodes found, recommending kindnet
	I0919 16:58:57.713466   25636 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 16:58:57.713475   25636 start_flags.go:321] config:
	{Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:58:57.713575   25636 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:58:57.715191   25636 out.go:177] * Starting control plane node multinode-553715 in cluster multinode-553715
	I0919 16:58:57.716450   25636 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 16:58:57.716478   25636 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 16:58:57.716484   25636 cache.go:57] Caching tarball of preloaded images
	I0919 16:58:57.716559   25636 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 16:58:57.716569   25636 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 16:58:57.716855   25636 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 16:58:57.716876   25636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json: {Name:mk756568bdd46e0f208a486ba48bd7a5a617e764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:58:57.716994   25636 start.go:365] acquiring machines lock for multinode-553715: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 16:58:57.717021   25636 start.go:369] acquired machines lock for "multinode-553715" in 13.367µs
	I0919 16:58:57.717036   25636 start.go:93] Provisioning new machine with config: &{Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 16:58:57.717096   25636 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 16:58:57.718680   25636 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 16:58:57.718784   25636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:58:57.718815   25636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:58:57.732355   25636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39867
	I0919 16:58:57.732812   25636 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:58:57.733271   25636 main.go:141] libmachine: Using API Version  1
	I0919 16:58:57.733291   25636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:58:57.733587   25636 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:58:57.733744   25636 main.go:141] libmachine: (multinode-553715) Calling .GetMachineName
	I0919 16:58:57.733886   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:58:57.734023   25636 start.go:159] libmachine.API.Create for "multinode-553715" (driver="kvm2")
	I0919 16:58:57.734055   25636 client.go:168] LocalClient.Create starting
	I0919 16:58:57.734084   25636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem
	I0919 16:58:57.734115   25636 main.go:141] libmachine: Decoding PEM data...
	I0919 16:58:57.734130   25636 main.go:141] libmachine: Parsing certificate...
	I0919 16:58:57.734176   25636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem
	I0919 16:58:57.734196   25636 main.go:141] libmachine: Decoding PEM data...
	I0919 16:58:57.734208   25636 main.go:141] libmachine: Parsing certificate...
	I0919 16:58:57.734228   25636 main.go:141] libmachine: Running pre-create checks...
	I0919 16:58:57.734238   25636 main.go:141] libmachine: (multinode-553715) Calling .PreCreateCheck
	I0919 16:58:57.734547   25636 main.go:141] libmachine: (multinode-553715) Calling .GetConfigRaw
	I0919 16:58:57.734888   25636 main.go:141] libmachine: Creating machine...
	I0919 16:58:57.734901   25636 main.go:141] libmachine: (multinode-553715) Calling .Create
	I0919 16:58:57.735018   25636 main.go:141] libmachine: (multinode-553715) Creating KVM machine...
	I0919 16:58:57.736212   25636 main.go:141] libmachine: (multinode-553715) DBG | found existing default KVM network
	I0919 16:58:57.736961   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:58:57.736818   25660 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001478f0}
	I0919 16:58:57.741727   25636 main.go:141] libmachine: (multinode-553715) DBG | trying to create private KVM network mk-multinode-553715 192.168.39.0/24...
	I0919 16:58:57.809030   25636 main.go:141] libmachine: (multinode-553715) DBG | private KVM network mk-multinode-553715 192.168.39.0/24 created
	I0919 16:58:57.809070   25636 main.go:141] libmachine: (multinode-553715) Setting up store path in /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715 ...
	I0919 16:58:57.809088   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:58:57.809014   25660 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:58:57.809111   25636 main.go:141] libmachine: (multinode-553715) Building disk image from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 16:58:57.809226   25636 main.go:141] libmachine: (multinode-553715) Downloading /home/jenkins/minikube-integration/17240-6042/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 16:58:58.011077   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:58:58.010961   25660 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa...
	I0919 16:58:58.077939   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:58:58.077805   25660 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/multinode-553715.rawdisk...
	I0919 16:58:58.077981   25636 main.go:141] libmachine: (multinode-553715) DBG | Writing magic tar header
	I0919 16:58:58.077996   25636 main.go:141] libmachine: (multinode-553715) DBG | Writing SSH key tar header
	I0919 16:58:58.078005   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:58:58.077922   25660 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715 ...
	I0919 16:58:58.078036   25636 main.go:141] libmachine: (multinode-553715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715
	I0919 16:58:58.078095   25636 main.go:141] libmachine: (multinode-553715) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715 (perms=drwx------)
	I0919 16:58:58.078121   25636 main.go:141] libmachine: (multinode-553715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines
	I0919 16:58:58.078130   25636 main.go:141] libmachine: (multinode-553715) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines (perms=drwxr-xr-x)
	I0919 16:58:58.078144   25636 main.go:141] libmachine: (multinode-553715) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube (perms=drwxr-xr-x)
	I0919 16:58:58.078154   25636 main.go:141] libmachine: (multinode-553715) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042 (perms=drwxrwxr-x)
	I0919 16:58:58.078165   25636 main.go:141] libmachine: (multinode-553715) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 16:58:58.078174   25636 main.go:141] libmachine: (multinode-553715) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 16:58:58.078183   25636 main.go:141] libmachine: (multinode-553715) Creating domain...
	I0919 16:58:58.078198   25636 main.go:141] libmachine: (multinode-553715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:58:58.078214   25636 main.go:141] libmachine: (multinode-553715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042
	I0919 16:58:58.078234   25636 main.go:141] libmachine: (multinode-553715) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 16:58:58.078248   25636 main.go:141] libmachine: (multinode-553715) DBG | Checking permissions on dir: /home/jenkins
	I0919 16:58:58.078265   25636 main.go:141] libmachine: (multinode-553715) DBG | Checking permissions on dir: /home
	I0919 16:58:58.078282   25636 main.go:141] libmachine: (multinode-553715) DBG | Skipping /home - not owner
	I0919 16:58:58.079153   25636 main.go:141] libmachine: (multinode-553715) define libvirt domain using xml: 
	I0919 16:58:58.079181   25636 main.go:141] libmachine: (multinode-553715) <domain type='kvm'>
	I0919 16:58:58.079193   25636 main.go:141] libmachine: (multinode-553715)   <name>multinode-553715</name>
	I0919 16:58:58.079210   25636 main.go:141] libmachine: (multinode-553715)   <memory unit='MiB'>2200</memory>
	I0919 16:58:58.079225   25636 main.go:141] libmachine: (multinode-553715)   <vcpu>2</vcpu>
	I0919 16:58:58.079238   25636 main.go:141] libmachine: (multinode-553715)   <features>
	I0919 16:58:58.079250   25636 main.go:141] libmachine: (multinode-553715)     <acpi/>
	I0919 16:58:58.079317   25636 main.go:141] libmachine: (multinode-553715)     <apic/>
	I0919 16:58:58.079347   25636 main.go:141] libmachine: (multinode-553715)     <pae/>
	I0919 16:58:58.079356   25636 main.go:141] libmachine: (multinode-553715)     
	I0919 16:58:58.079374   25636 main.go:141] libmachine: (multinode-553715)   </features>
	I0919 16:58:58.079406   25636 main.go:141] libmachine: (multinode-553715)   <cpu mode='host-passthrough'>
	I0919 16:58:58.079430   25636 main.go:141] libmachine: (multinode-553715)   
	I0919 16:58:58.079443   25636 main.go:141] libmachine: (multinode-553715)   </cpu>
	I0919 16:58:58.079454   25636 main.go:141] libmachine: (multinode-553715)   <os>
	I0919 16:58:58.079467   25636 main.go:141] libmachine: (multinode-553715)     <type>hvm</type>
	I0919 16:58:58.079480   25636 main.go:141] libmachine: (multinode-553715)     <boot dev='cdrom'/>
	I0919 16:58:58.079495   25636 main.go:141] libmachine: (multinode-553715)     <boot dev='hd'/>
	I0919 16:58:58.079513   25636 main.go:141] libmachine: (multinode-553715)     <bootmenu enable='no'/>
	I0919 16:58:58.079527   25636 main.go:141] libmachine: (multinode-553715)   </os>
	I0919 16:58:58.079538   25636 main.go:141] libmachine: (multinode-553715)   <devices>
	I0919 16:58:58.079551   25636 main.go:141] libmachine: (multinode-553715)     <disk type='file' device='cdrom'>
	I0919 16:58:58.079581   25636 main.go:141] libmachine: (multinode-553715)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/boot2docker.iso'/>
	I0919 16:58:58.079597   25636 main.go:141] libmachine: (multinode-553715)       <target dev='hdc' bus='scsi'/>
	I0919 16:58:58.079611   25636 main.go:141] libmachine: (multinode-553715)       <readonly/>
	I0919 16:58:58.079624   25636 main.go:141] libmachine: (multinode-553715)     </disk>
	I0919 16:58:58.079640   25636 main.go:141] libmachine: (multinode-553715)     <disk type='file' device='disk'>
	I0919 16:58:58.079659   25636 main.go:141] libmachine: (multinode-553715)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 16:58:58.079679   25636 main.go:141] libmachine: (multinode-553715)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/multinode-553715.rawdisk'/>
	I0919 16:58:58.079693   25636 main.go:141] libmachine: (multinode-553715)       <target dev='hda' bus='virtio'/>
	I0919 16:58:58.079702   25636 main.go:141] libmachine: (multinode-553715)     </disk>
	I0919 16:58:58.079708   25636 main.go:141] libmachine: (multinode-553715)     <interface type='network'>
	I0919 16:58:58.079718   25636 main.go:141] libmachine: (multinode-553715)       <source network='mk-multinode-553715'/>
	I0919 16:58:58.079726   25636 main.go:141] libmachine: (multinode-553715)       <model type='virtio'/>
	I0919 16:58:58.079735   25636 main.go:141] libmachine: (multinode-553715)     </interface>
	I0919 16:58:58.079743   25636 main.go:141] libmachine: (multinode-553715)     <interface type='network'>
	I0919 16:58:58.079752   25636 main.go:141] libmachine: (multinode-553715)       <source network='default'/>
	I0919 16:58:58.079763   25636 main.go:141] libmachine: (multinode-553715)       <model type='virtio'/>
	I0919 16:58:58.079823   25636 main.go:141] libmachine: (multinode-553715)     </interface>
	I0919 16:58:58.079847   25636 main.go:141] libmachine: (multinode-553715)     <serial type='pty'>
	I0919 16:58:58.079860   25636 main.go:141] libmachine: (multinode-553715)       <target port='0'/>
	I0919 16:58:58.079873   25636 main.go:141] libmachine: (multinode-553715)     </serial>
	I0919 16:58:58.079887   25636 main.go:141] libmachine: (multinode-553715)     <console type='pty'>
	I0919 16:58:58.079901   25636 main.go:141] libmachine: (multinode-553715)       <target type='serial' port='0'/>
	I0919 16:58:58.079914   25636 main.go:141] libmachine: (multinode-553715)     </console>
	I0919 16:58:58.079926   25636 main.go:141] libmachine: (multinode-553715)     <rng model='virtio'>
	I0919 16:58:58.079947   25636 main.go:141] libmachine: (multinode-553715)       <backend model='random'>/dev/random</backend>
	I0919 16:58:58.079965   25636 main.go:141] libmachine: (multinode-553715)     </rng>
	I0919 16:58:58.079982   25636 main.go:141] libmachine: (multinode-553715)     
	I0919 16:58:58.079998   25636 main.go:141] libmachine: (multinode-553715)     
	I0919 16:58:58.080012   25636 main.go:141] libmachine: (multinode-553715)   </devices>
	I0919 16:58:58.080025   25636 main.go:141] libmachine: (multinode-553715) </domain>
	I0919 16:58:58.080047   25636 main.go:141] libmachine: (multinode-553715) 
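	
	The block above is the libvirt domain XML the kvm2 driver defines for this node: the boot2docker ISO attached as a SCSI cdrom, the raw disk on virtio, one NIC on the private mk-multinode-553715 network and one on the default network, plus a virtio RNG. If a run like this needs debugging on the build host, the stored definition and DHCP leases can usually be inspected with standard libvirt tooling; this is an assumed debugging aid, not part of the recorded run:
	
	  # hypothetical follow-up commands, assuming virsh is available and the domain still exists
	  virsh dumpxml multinode-553715              # domain XML as libvirt stored it
	  virsh net-dhcp-leases mk-multinode-553715   # leases on the private network created above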
	I0919 16:58:58.084120   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:87:bc:f9 in network default
	I0919 16:58:58.084660   25636 main.go:141] libmachine: (multinode-553715) Ensuring networks are active...
	I0919 16:58:58.084678   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:58:58.085363   25636 main.go:141] libmachine: (multinode-553715) Ensuring network default is active
	I0919 16:58:58.085658   25636 main.go:141] libmachine: (multinode-553715) Ensuring network mk-multinode-553715 is active
	I0919 16:58:58.086105   25636 main.go:141] libmachine: (multinode-553715) Getting domain xml...
	I0919 16:58:58.086769   25636 main.go:141] libmachine: (multinode-553715) Creating domain...
	I0919 16:58:59.289970   25636 main.go:141] libmachine: (multinode-553715) Waiting to get IP...
	I0919 16:58:59.290726   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:58:59.291101   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:58:59.291135   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:58:59.291092   25660 retry.go:31] will retry after 223.112922ms: waiting for machine to come up
	I0919 16:58:59.515421   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:58:59.515932   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:58:59.515972   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:58:59.515899   25660 retry.go:31] will retry after 330.402631ms: waiting for machine to come up
	I0919 16:58:59.847437   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:58:59.847852   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:58:59.847884   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:58:59.847828   25660 retry.go:31] will retry after 402.896194ms: waiting for machine to come up
	I0919 16:59:00.252353   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:00.252799   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:00.252821   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:00.252760   25660 retry.go:31] will retry after 430.85235ms: waiting for machine to come up
	I0919 16:59:00.685383   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:00.685802   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:00.685834   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:00.685760   25660 retry.go:31] will retry after 721.624951ms: waiting for machine to come up
	I0919 16:59:01.408578   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:01.409032   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:01.409063   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:01.408986   25660 retry.go:31] will retry after 577.120697ms: waiting for machine to come up
	I0919 16:59:01.987445   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:01.987844   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:01.987873   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:01.987805   25660 retry.go:31] will retry after 866.086282ms: waiting for machine to come up
	I0919 16:59:02.855290   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:02.855716   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:02.855743   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:02.855663   25660 retry.go:31] will retry after 1.034586899s: waiting for machine to come up
	I0919 16:59:03.891845   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:03.892207   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:03.892239   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:03.892154   25660 retry.go:31] will retry after 1.164333888s: waiting for machine to come up
	I0919 16:59:05.058488   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:05.058848   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:05.058878   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:05.058805   25660 retry.go:31] will retry after 1.810668277s: waiting for machine to come up
	I0919 16:59:06.871731   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:06.872200   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:06.872231   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:06.872131   25660 retry.go:31] will retry after 2.454165182s: waiting for machine to come up
	I0919 16:59:09.328720   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:09.329073   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:09.329105   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:09.329027   25660 retry.go:31] will retry after 3.419275818s: waiting for machine to come up
	I0919 16:59:12.750434   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:12.750761   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:12.750783   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:12.750730   25660 retry.go:31] will retry after 3.16591894s: waiting for machine to come up
	I0919 16:59:15.919271   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:15.919616   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 16:59:15.919638   25636 main.go:141] libmachine: (multinode-553715) DBG | I0919 16:59:15.919555   25660 retry.go:31] will retry after 4.263650217s: waiting for machine to come up
	I0919 16:59:20.186439   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.186849   25636 main.go:141] libmachine: (multinode-553715) Found IP for machine: 192.168.39.38
	I0919 16:59:20.186875   25636 main.go:141] libmachine: (multinode-553715) Reserving static IP address...
	I0919 16:59:20.186893   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has current primary IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.187249   25636 main.go:141] libmachine: (multinode-553715) DBG | unable to find host DHCP lease matching {name: "multinode-553715", mac: "52:54:00:01:c6:86", ip: "192.168.39.38"} in network mk-multinode-553715
	I0919 16:59:20.255892   25636 main.go:141] libmachine: (multinode-553715) DBG | Getting to WaitForSSH function...
	I0919 16:59:20.255927   25636 main.go:141] libmachine: (multinode-553715) Reserved static IP address: 192.168.39.38
	I0919 16:59:20.255941   25636 main.go:141] libmachine: (multinode-553715) Waiting for SSH to be available...
	I0919 16:59:20.258322   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.258770   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:minikube Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:20.258808   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.258871   25636 main.go:141] libmachine: (multinode-553715) DBG | Using SSH client type: external
	I0919 16:59:20.258888   25636 main.go:141] libmachine: (multinode-553715) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa (-rw-------)
	I0919 16:59:20.258926   25636 main.go:141] libmachine: (multinode-553715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 16:59:20.258962   25636 main.go:141] libmachine: (multinode-553715) DBG | About to run SSH command:
	I0919 16:59:20.258986   25636 main.go:141] libmachine: (multinode-553715) DBG | exit 0
	I0919 16:59:20.351787   25636 main.go:141] libmachine: (multinode-553715) DBG | SSH cmd err, output: <nil>: 
	I0919 16:59:20.352068   25636 main.go:141] libmachine: (multinode-553715) KVM machine creation complete!
	I0919 16:59:20.352366   25636 main.go:141] libmachine: (multinode-553715) Calling .GetConfigRaw
	I0919 16:59:20.352855   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:59:20.353042   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:59:20.353224   25636 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 16:59:20.353237   25636 main.go:141] libmachine: (multinode-553715) Calling .GetState
	I0919 16:59:20.354522   25636 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 16:59:20.354538   25636 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 16:59:20.354547   25636 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 16:59:20.354557   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:20.356606   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.356887   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:20.356922   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.357027   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:20.357176   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:20.357315   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:20.357428   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:20.357542   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 16:59:20.357872   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 16:59:20.357886   25636 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 16:59:20.479345   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:59:20.479368   25636 main.go:141] libmachine: Detecting the provisioner...
	I0919 16:59:20.479379   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:20.482068   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.482485   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:20.482517   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.482701   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:20.482852   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:20.483022   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:20.483154   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:20.483317   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 16:59:20.483639   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 16:59:20.483651   25636 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 16:59:20.604946   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0919 16:59:20.605038   25636 main.go:141] libmachine: found compatible host: buildroot
	I0919 16:59:20.605054   25636 main.go:141] libmachine: Provisioning with buildroot...
	I0919 16:59:20.605067   25636 main.go:141] libmachine: (multinode-553715) Calling .GetMachineName
	I0919 16:59:20.605363   25636 buildroot.go:166] provisioning hostname "multinode-553715"
	I0919 16:59:20.605398   25636 main.go:141] libmachine: (multinode-553715) Calling .GetMachineName
	I0919 16:59:20.605640   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:20.608105   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.608474   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:20.608502   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.608589   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:20.608754   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:20.608887   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:20.608992   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:20.609135   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 16:59:20.609507   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 16:59:20.609523   25636 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553715 && echo "multinode-553715" | sudo tee /etc/hostname
	I0919 16:59:20.744726   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553715
	
	I0919 16:59:20.744753   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:20.747330   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.747643   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:20.747673   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.747816   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:20.748008   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:20.748183   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:20.748345   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:20.748513   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 16:59:20.748865   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 16:59:20.748884   25636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553715/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 16:59:20.879695   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:59:20.879726   25636 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 16:59:20.879772   25636 buildroot.go:174] setting up certificates
	I0919 16:59:20.879793   25636 provision.go:83] configureAuth start
	I0919 16:59:20.879814   25636 main.go:141] libmachine: (multinode-553715) Calling .GetMachineName
	I0919 16:59:20.880064   25636 main.go:141] libmachine: (multinode-553715) Calling .GetIP
	I0919 16:59:20.883062   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.883474   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:20.883506   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.883588   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:20.885869   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.886183   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:20.886214   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:20.886317   25636 provision.go:138] copyHostCerts
	I0919 16:59:20.886342   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 16:59:20.886371   25636 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 16:59:20.886380   25636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 16:59:20.886435   25636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 16:59:20.886524   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 16:59:20.886542   25636 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 16:59:20.886549   25636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 16:59:20.886568   25636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 16:59:20.886609   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 16:59:20.886625   25636 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 16:59:20.886632   25636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 16:59:20.886647   25636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 16:59:20.886690   25636 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.multinode-553715 san=[192.168.39.38 192.168.39.38 localhost 127.0.0.1 minikube multinode-553715]
	I0919 16:59:21.035867   25636 provision.go:172] copyRemoteCerts
	I0919 16:59:21.035917   25636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 16:59:21.035939   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:21.038750   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.039075   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:21.039109   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.039271   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:21.039490   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:21.039659   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:21.039797   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 16:59:21.128703   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 16:59:21.128762   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 16:59:21.150947   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 16:59:21.151002   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0919 16:59:21.172901   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 16:59:21.172961   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 16:59:21.194568   25636 provision.go:86] duration metric: configureAuth took 314.76051ms
	I0919 16:59:21.194587   25636 buildroot.go:189] setting minikube options for container-runtime
	I0919 16:59:21.194739   25636 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 16:59:21.194809   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:21.197469   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.197814   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:21.197839   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.198026   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:21.198245   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:21.198431   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:21.198556   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:21.198704   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 16:59:21.199002   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 16:59:21.199019   25636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 16:59:21.502972   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
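
The %!s(MISSING) in the printf above is a Go format placeholder the logger did not fill in; the echoed SSH output shows what was actually written. Reconstructed from that output, the step amounts to a small sysconfig drop-in for CRI-O (a sketch, assuming the guest uses /etc/sysconfig and a systemd-managed crio unit as shown):

	# Persist extra CRI-O flags via a sysconfig drop-in, then restart the runtime
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
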
	
	I0919 16:59:21.502998   25636 main.go:141] libmachine: Checking connection to Docker...
	I0919 16:59:21.503007   25636 main.go:141] libmachine: (multinode-553715) Calling .GetURL
	I0919 16:59:21.504176   25636 main.go:141] libmachine: (multinode-553715) DBG | Using libvirt version 6000000
	I0919 16:59:21.506323   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.506630   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:21.506662   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.506849   25636 main.go:141] libmachine: Docker is up and running!
	I0919 16:59:21.506864   25636 main.go:141] libmachine: Reticulating splines...
	I0919 16:59:21.506870   25636 client.go:171] LocalClient.Create took 23.772805554s
	I0919 16:59:21.506890   25636 start.go:167] duration metric: libmachine.API.Create for "multinode-553715" took 23.772868291s
	I0919 16:59:21.506900   25636 start.go:300] post-start starting for "multinode-553715" (driver="kvm2")
	I0919 16:59:21.506908   25636 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 16:59:21.506924   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:59:21.507138   25636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 16:59:21.507165   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:21.509615   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.510413   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:21.510439   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.510606   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:21.510788   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:21.510975   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:21.511097   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 16:59:21.602204   25636 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 16:59:21.606017   25636 command_runner.go:130] > NAME=Buildroot
	I0919 16:59:21.606038   25636 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I0919 16:59:21.606044   25636 command_runner.go:130] > ID=buildroot
	I0919 16:59:21.606050   25636 command_runner.go:130] > VERSION_ID=2021.02.12
	I0919 16:59:21.606056   25636 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0919 16:59:21.606081   25636 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 16:59:21.606095   25636 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 16:59:21.606154   25636 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 16:59:21.606254   25636 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 16:59:21.606271   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /etc/ssl/certs/132392.pem
	I0919 16:59:21.606373   25636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 16:59:21.615180   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 16:59:21.636511   25636 start.go:303] post-start completed in 129.598205ms
	I0919 16:59:21.636565   25636 main.go:141] libmachine: (multinode-553715) Calling .GetConfigRaw
	I0919 16:59:21.637106   25636 main.go:141] libmachine: (multinode-553715) Calling .GetIP
	I0919 16:59:21.639583   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.639931   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:21.639968   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.640138   25636 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 16:59:21.640305   25636 start.go:128] duration metric: createHost completed in 23.923202004s
	I0919 16:59:21.640325   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:21.642260   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.642627   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:21.642664   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.642765   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:21.642910   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:21.643074   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:21.643205   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:21.643355   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 16:59:21.643655   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 16:59:21.643667   25636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 16:59:21.764945   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142761.734967537
	
	I0919 16:59:21.764970   25636 fix.go:206] guest clock: 1695142761.734967537
	I0919 16:59:21.764977   25636 fix.go:219] Guest: 2023-09-19 16:59:21.734967537 +0000 UTC Remote: 2023-09-19 16:59:21.640315632 +0000 UTC m=+24.021711206 (delta=94.651905ms)
	I0919 16:59:21.765005   25636 fix.go:190] guest clock delta is within tolerance: 94.651905ms
	I0919 16:59:21.765010   25636 start.go:83] releasing machines lock for "multinode-553715", held for 24.047980572s
	I0919 16:59:21.765027   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:59:21.765272   25636 main.go:141] libmachine: (multinode-553715) Calling .GetIP
	I0919 16:59:21.767671   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.768044   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:21.768080   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.768219   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:59:21.768709   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:59:21.768875   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:59:21.768950   25636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 16:59:21.768986   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:21.769056   25636 ssh_runner.go:195] Run: cat /version.json
	I0919 16:59:21.769074   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:21.771435   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.771705   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:21.771737   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.771795   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:21.771836   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.771950   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:21.772134   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:21.772182   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:21.772217   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:21.772349   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:21.772371   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 16:59:21.772508   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:21.772634   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:21.772739   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 16:59:21.857824   25636 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I0919 16:59:21.857995   25636 ssh_runner.go:195] Run: systemctl --version
	I0919 16:59:21.882059   25636 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0919 16:59:21.882118   25636 command_runner.go:130] > systemd 247 (247)
	I0919 16:59:21.882151   25636 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0919 16:59:21.882227   25636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 16:59:22.042771   25636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 16:59:22.048124   25636 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0919 16:59:22.048320   25636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 16:59:22.048369   25636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 16:59:22.064347   25636 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0919 16:59:22.064453   25636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 16:59:22.064465   25636 start.go:469] detecting cgroup driver to use...
	I0919 16:59:22.064506   25636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 16:59:22.078195   25636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 16:59:22.090879   25636 docker.go:196] disabling cri-docker service (if available) ...
	I0919 16:59:22.090923   25636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 16:59:22.103878   25636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 16:59:22.117017   25636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 16:59:22.130757   25636 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0919 16:59:22.221769   25636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 16:59:22.235186   25636 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0919 16:59:22.336282   25636 docker.go:212] disabling docker service ...
	I0919 16:59:22.336353   25636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 16:59:22.348679   25636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 16:59:22.360076   25636 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0919 16:59:22.360151   25636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 16:59:22.464716   25636 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0919 16:59:22.464792   25636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 16:59:22.476761   25636 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0919 16:59:22.476833   25636 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0919 16:59:22.568226   25636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
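
The cri-dockerd and Docker teardown logged above boils down to stopping, disabling, and masking the competing runtime units so that CRI-O is the only container runtime left active. A consolidated sketch of those commands (the "Unit ... not loaded" messages above show some of them are no-ops on this image):

	# Stop and mask Docker-related units; failures for units that are not loaded are harmless
	sudo systemctl stop -f cri-docker.socket cri-docker.service || true
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service || true
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active --quiet docker && echo "docker is still active"
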
	I0919 16:59:22.580119   25636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 16:59:22.596767   25636 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0919 16:59:22.596812   25636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0919 16:59:22.596859   25636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:59:22.605462   25636 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 16:59:22.605518   25636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:59:22.614451   25636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:59:22.622926   25636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 16:59:22.631427   25636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 16:59:22.640196   25636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 16:59:22.647523   25636 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 16:59:22.647633   25636 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 16:59:22.647684   25636 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 16:59:22.659407   25636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 16:59:22.667289   25636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:59:22.768942   25636 ssh_runner.go:195] Run: sudo systemctl restart crio
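
Taken together, the CRI-O reconfiguration performed just above edits /etc/crio/crio.conf.d/02-crio.conf for the pause image and cgroup driver, clears the stale CNI config, makes sure bridge netfilter and IPv4 forwarding are available, and restarts the runtime. A consolidated sketch of those commands, mirroring the log (assumes GNU sed and a guest where br_netfilter is not built in, as the failed sysctl check above suggests):

	# Point CRI-O at pause:3.9 and the cgroupfs driver, then restart it
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
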
	I0919 16:59:22.933302   25636 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 16:59:22.933383   25636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 16:59:22.938856   25636 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0919 16:59:22.938876   25636 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0919 16:59:22.938882   25636 command_runner.go:130] > Device: 16h/22d	Inode: 765         Links: 1
	I0919 16:59:22.938889   25636 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 16:59:22.938894   25636 command_runner.go:130] > Access: 2023-09-19 16:59:22.892381759 +0000
	I0919 16:59:22.938899   25636 command_runner.go:130] > Modify: 2023-09-19 16:59:22.892381759 +0000
	I0919 16:59:22.938908   25636 command_runner.go:130] > Change: 2023-09-19 16:59:22.892381759 +0000
	I0919 16:59:22.938914   25636 command_runner.go:130] >  Birth: -
	I0919 16:59:22.939162   25636 start.go:537] Will wait 60s for crictl version
	I0919 16:59:22.939210   25636 ssh_runner.go:195] Run: which crictl
	I0919 16:59:22.942534   25636 command_runner.go:130] > /usr/bin/crictl
	I0919 16:59:22.942584   25636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 16:59:22.979631   25636 command_runner.go:130] > Version:  0.1.0
	I0919 16:59:22.979651   25636 command_runner.go:130] > RuntimeName:  cri-o
	I0919 16:59:22.979656   25636 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0919 16:59:22.979661   25636 command_runner.go:130] > RuntimeApiVersion:  v1
	I0919 16:59:22.981280   25636 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 16:59:22.981337   25636 ssh_runner.go:195] Run: crio --version
	I0919 16:59:23.026558   25636 command_runner.go:130] > crio version 1.24.1
	I0919 16:59:23.026584   25636 command_runner.go:130] > Version:          1.24.1
	I0919 16:59:23.026594   25636 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 16:59:23.026601   25636 command_runner.go:130] > GitTreeState:     dirty
	I0919 16:59:23.026609   25636 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 16:59:23.026617   25636 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 16:59:23.026624   25636 command_runner.go:130] > Compiler:         gc
	I0919 16:59:23.026631   25636 command_runner.go:130] > Platform:         linux/amd64
	I0919 16:59:23.026656   25636 command_runner.go:130] > Linkmode:         dynamic
	I0919 16:59:23.026675   25636 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 16:59:23.026686   25636 command_runner.go:130] > SeccompEnabled:   true
	I0919 16:59:23.026694   25636 command_runner.go:130] > AppArmorEnabled:  false
	I0919 16:59:23.026776   25636 ssh_runner.go:195] Run: crio --version
	I0919 16:59:23.071492   25636 command_runner.go:130] > crio version 1.24.1
	I0919 16:59:23.071520   25636 command_runner.go:130] > Version:          1.24.1
	I0919 16:59:23.071530   25636 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 16:59:23.071537   25636 command_runner.go:130] > GitTreeState:     dirty
	I0919 16:59:23.071547   25636 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 16:59:23.071554   25636 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 16:59:23.071568   25636 command_runner.go:130] > Compiler:         gc
	I0919 16:59:23.071578   25636 command_runner.go:130] > Platform:         linux/amd64
	I0919 16:59:23.071587   25636 command_runner.go:130] > Linkmode:         dynamic
	I0919 16:59:23.071596   25636 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 16:59:23.071603   25636 command_runner.go:130] > SeccompEnabled:   true
	I0919 16:59:23.071607   25636 command_runner.go:130] > AppArmorEnabled:  false
	I0919 16:59:23.073448   25636 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I0919 16:59:23.074701   25636 main.go:141] libmachine: (multinode-553715) Calling .GetIP
	I0919 16:59:23.077757   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:23.078168   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:23.078201   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:23.078367   25636 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 16:59:23.082758   25636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 16:59:23.095338   25636 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 16:59:23.095389   25636 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 16:59:23.125836   25636 command_runner.go:130] > {
	I0919 16:59:23.125857   25636 command_runner.go:130] >   "images": [
	I0919 16:59:23.125862   25636 command_runner.go:130] >   ]
	I0919 16:59:23.125867   25636 command_runner.go:130] > }
	I0919 16:59:23.127216   25636 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I0919 16:59:23.127278   25636 ssh_runner.go:195] Run: which lz4
	I0919 16:59:23.130783   25636 command_runner.go:130] > /usr/bin/lz4
	I0919 16:59:23.131076   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0919 16:59:23.131156   25636 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0919 16:59:23.134839   25636 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 16:59:23.135154   25636 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 16:59:23.135179   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I0919 16:59:24.900384   25636 crio.go:444] Took 1.769254 seconds to copy over tarball
	I0919 16:59:24.900476   25636 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 16:59:27.806889   25636 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.906385653s)
	I0919 16:59:27.806914   25636 crio.go:451] Took 2.906507 seconds to extract the tarball
	I0919 16:59:27.806926   25636 ssh_runner.go:146] rm: /preloaded.tar.lz4
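
The preload sequence above copies the lz4-compressed image tarball into the guest and unpacks it directly over /var so that CRI-O's image store is populated without pulling from a registry. On the guest, the unpack-and-verify portion is simply (a sketch; the tarball path is the target of the scp above):

	# Extract preloaded images into /var/lib/containers/storage, then confirm CRI-O sees them
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json    # should now list the v1.28.2 control-plane images
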
	I0919 16:59:27.851917   25636 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 16:59:27.919677   25636 command_runner.go:130] > {
	I0919 16:59:27.919713   25636 command_runner.go:130] >   "images": [
	I0919 16:59:27.919721   25636 command_runner.go:130] >     {
	I0919 16:59:27.919733   25636 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0919 16:59:27.919738   25636 command_runner.go:130] >       "repoTags": [
	I0919 16:59:27.919744   25636 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0919 16:59:27.919748   25636 command_runner.go:130] >       ],
	I0919 16:59:27.919752   25636 command_runner.go:130] >       "repoDigests": [
	I0919 16:59:27.919762   25636 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0919 16:59:27.919773   25636 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0919 16:59:27.919779   25636 command_runner.go:130] >       ],
	I0919 16:59:27.919790   25636 command_runner.go:130] >       "size": "65258016",
	I0919 16:59:27.919801   25636 command_runner.go:130] >       "uid": null,
	I0919 16:59:27.919812   25636 command_runner.go:130] >       "username": "",
	I0919 16:59:27.919822   25636 command_runner.go:130] >       "spec": null,
	I0919 16:59:27.919830   25636 command_runner.go:130] >       "pinned": false
	I0919 16:59:27.919835   25636 command_runner.go:130] >     },
	I0919 16:59:27.919839   25636 command_runner.go:130] >     {
	I0919 16:59:27.919847   25636 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0919 16:59:27.919860   25636 command_runner.go:130] >       "repoTags": [
	I0919 16:59:27.919872   25636 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0919 16:59:27.919883   25636 command_runner.go:130] >       ],
	I0919 16:59:27.919893   25636 command_runner.go:130] >       "repoDigests": [
	I0919 16:59:27.919910   25636 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0919 16:59:27.919931   25636 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0919 16:59:27.919940   25636 command_runner.go:130] >       ],
	I0919 16:59:27.919951   25636 command_runner.go:130] >       "size": "31470524",
	I0919 16:59:27.919961   25636 command_runner.go:130] >       "uid": null,
	I0919 16:59:27.919972   25636 command_runner.go:130] >       "username": "",
	I0919 16:59:27.919982   25636 command_runner.go:130] >       "spec": null,
	I0919 16:59:27.919992   25636 command_runner.go:130] >       "pinned": false
	I0919 16:59:27.920001   25636 command_runner.go:130] >     },
	I0919 16:59:27.920011   25636 command_runner.go:130] >     {
	I0919 16:59:27.920023   25636 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0919 16:59:27.920031   25636 command_runner.go:130] >       "repoTags": [
	I0919 16:59:27.920038   25636 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0919 16:59:27.920047   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920059   25636 command_runner.go:130] >       "repoDigests": [
	I0919 16:59:27.920075   25636 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0919 16:59:27.920090   25636 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0919 16:59:27.920100   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920110   25636 command_runner.go:130] >       "size": "53621675",
	I0919 16:59:27.920118   25636 command_runner.go:130] >       "uid": null,
	I0919 16:59:27.920126   25636 command_runner.go:130] >       "username": "",
	I0919 16:59:27.920137   25636 command_runner.go:130] >       "spec": null,
	I0919 16:59:27.920148   25636 command_runner.go:130] >       "pinned": false
	I0919 16:59:27.920155   25636 command_runner.go:130] >     },
	I0919 16:59:27.920164   25636 command_runner.go:130] >     {
	I0919 16:59:27.920177   25636 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0919 16:59:27.920187   25636 command_runner.go:130] >       "repoTags": [
	I0919 16:59:27.920198   25636 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0919 16:59:27.920205   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920211   25636 command_runner.go:130] >       "repoDigests": [
	I0919 16:59:27.920227   25636 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0919 16:59:27.920242   25636 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0919 16:59:27.920261   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920272   25636 command_runner.go:130] >       "size": "295456551",
	I0919 16:59:27.920281   25636 command_runner.go:130] >       "uid": {
	I0919 16:59:27.920289   25636 command_runner.go:130] >         "value": "0"
	I0919 16:59:27.920292   25636 command_runner.go:130] >       },
	I0919 16:59:27.920303   25636 command_runner.go:130] >       "username": "",
	I0919 16:59:27.920314   25636 command_runner.go:130] >       "spec": null,
	I0919 16:59:27.920324   25636 command_runner.go:130] >       "pinned": false
	I0919 16:59:27.920330   25636 command_runner.go:130] >     },
	I0919 16:59:27.920339   25636 command_runner.go:130] >     {
	I0919 16:59:27.920352   25636 command_runner.go:130] >       "id": "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce",
	I0919 16:59:27.920362   25636 command_runner.go:130] >       "repoTags": [
	I0919 16:59:27.920373   25636 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I0919 16:59:27.920380   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920391   25636 command_runner.go:130] >       "repoDigests": [
	I0919 16:59:27.920417   25636 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631",
	I0919 16:59:27.920433   25636 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I0919 16:59:27.920442   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920457   25636 command_runner.go:130] >       "size": "127149008",
	I0919 16:59:27.920467   25636 command_runner.go:130] >       "uid": {
	I0919 16:59:27.920476   25636 command_runner.go:130] >         "value": "0"
	I0919 16:59:27.920485   25636 command_runner.go:130] >       },
	I0919 16:59:27.920497   25636 command_runner.go:130] >       "username": "",
	I0919 16:59:27.920507   25636 command_runner.go:130] >       "spec": null,
	I0919 16:59:27.920517   25636 command_runner.go:130] >       "pinned": false
	I0919 16:59:27.920523   25636 command_runner.go:130] >     },
	I0919 16:59:27.920537   25636 command_runner.go:130] >     {
	I0919 16:59:27.920549   25636 command_runner.go:130] >       "id": "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57",
	I0919 16:59:27.920556   25636 command_runner.go:130] >       "repoTags": [
	I0919 16:59:27.920565   25636 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I0919 16:59:27.920574   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920585   25636 command_runner.go:130] >       "repoDigests": [
	I0919 16:59:27.920601   25636 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4",
	I0919 16:59:27.920617   25636 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"
	I0919 16:59:27.920626   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920635   25636 command_runner.go:130] >       "size": "123171638",
	I0919 16:59:27.920644   25636 command_runner.go:130] >       "uid": {
	I0919 16:59:27.920654   25636 command_runner.go:130] >         "value": "0"
	I0919 16:59:27.920661   25636 command_runner.go:130] >       },
	I0919 16:59:27.920672   25636 command_runner.go:130] >       "username": "",
	I0919 16:59:27.920682   25636 command_runner.go:130] >       "spec": null,
	I0919 16:59:27.920696   25636 command_runner.go:130] >       "pinned": false
	I0919 16:59:27.920705   25636 command_runner.go:130] >     },
	I0919 16:59:27.920714   25636 command_runner.go:130] >     {
	I0919 16:59:27.920723   25636 command_runner.go:130] >       "id": "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",
	I0919 16:59:27.920732   25636 command_runner.go:130] >       "repoTags": [
	I0919 16:59:27.920745   25636 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I0919 16:59:27.920755   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920765   25636 command_runner.go:130] >       "repoDigests": [
	I0919 16:59:27.920779   25636 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded",
	I0919 16:59:27.920794   25636 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"
	I0919 16:59:27.920802   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920810   25636 command_runner.go:130] >       "size": "74687895",
	I0919 16:59:27.920815   25636 command_runner.go:130] >       "uid": null,
	I0919 16:59:27.920828   25636 command_runner.go:130] >       "username": "",
	I0919 16:59:27.920839   25636 command_runner.go:130] >       "spec": null,
	I0919 16:59:27.920850   25636 command_runner.go:130] >       "pinned": false
	I0919 16:59:27.920859   25636 command_runner.go:130] >     },
	I0919 16:59:27.920865   25636 command_runner.go:130] >     {
	I0919 16:59:27.920879   25636 command_runner.go:130] >       "id": "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8",
	I0919 16:59:27.920888   25636 command_runner.go:130] >       "repoTags": [
	I0919 16:59:27.920896   25636 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I0919 16:59:27.920905   25636 command_runner.go:130] >       ],
	I0919 16:59:27.920916   25636 command_runner.go:130] >       "repoDigests": [
	I0919 16:59:27.920972   25636 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I0919 16:59:27.920988   25636 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"
	I0919 16:59:27.920994   25636 command_runner.go:130] >       ],
	I0919 16:59:27.921001   25636 command_runner.go:130] >       "size": "61485878",
	I0919 16:59:27.921011   25636 command_runner.go:130] >       "uid": {
	I0919 16:59:27.921021   25636 command_runner.go:130] >         "value": "0"
	I0919 16:59:27.921031   25636 command_runner.go:130] >       },
	I0919 16:59:27.921040   25636 command_runner.go:130] >       "username": "",
	I0919 16:59:27.921053   25636 command_runner.go:130] >       "spec": null,
	I0919 16:59:27.921062   25636 command_runner.go:130] >       "pinned": false
	I0919 16:59:27.921070   25636 command_runner.go:130] >     },
	I0919 16:59:27.921073   25636 command_runner.go:130] >     {
	I0919 16:59:27.921082   25636 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0919 16:59:27.921093   25636 command_runner.go:130] >       "repoTags": [
	I0919 16:59:27.921105   25636 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0919 16:59:27.921115   25636 command_runner.go:130] >       ],
	I0919 16:59:27.921125   25636 command_runner.go:130] >       "repoDigests": [
	I0919 16:59:27.921139   25636 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0919 16:59:27.921153   25636 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0919 16:59:27.921159   25636 command_runner.go:130] >       ],
	I0919 16:59:27.921165   25636 command_runner.go:130] >       "size": "750414",
	I0919 16:59:27.921175   25636 command_runner.go:130] >       "uid": {
	I0919 16:59:27.921187   25636 command_runner.go:130] >         "value": "65535"
	I0919 16:59:27.921196   25636 command_runner.go:130] >       },
	I0919 16:59:27.921206   25636 command_runner.go:130] >       "username": "",
	I0919 16:59:27.921216   25636 command_runner.go:130] >       "spec": null,
	I0919 16:59:27.921234   25636 command_runner.go:130] >       "pinned": false
	I0919 16:59:27.921242   25636 command_runner.go:130] >     }
	I0919 16:59:27.921245   25636 command_runner.go:130] >   ]
	I0919 16:59:27.921250   25636 command_runner.go:130] > }
	I0919 16:59:27.921388   25636 crio.go:496] all images are preloaded for cri-o runtime.
	I0919 16:59:27.921402   25636 cache_images.go:84] Images are preloaded, skipping loading
	I0919 16:59:27.921479   25636 ssh_runner.go:195] Run: crio config
	I0919 16:59:27.977746   25636 command_runner.go:130] ! time="2023-09-19 16:59:27.956924445Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0919 16:59:27.977772   25636 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0919 16:59:27.985785   25636 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0919 16:59:27.985801   25636 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0919 16:59:27.985807   25636 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0919 16:59:27.985811   25636 command_runner.go:130] > #
	I0919 16:59:27.985818   25636 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0919 16:59:27.985824   25636 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0919 16:59:27.985830   25636 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0919 16:59:27.985841   25636 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0919 16:59:27.985851   25636 command_runner.go:130] > # reload'.
	I0919 16:59:27.985860   25636 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0919 16:59:27.985872   25636 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0919 16:59:27.985884   25636 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0919 16:59:27.985896   25636 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0919 16:59:27.985905   25636 command_runner.go:130] > [crio]
	I0919 16:59:27.985915   25636 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0919 16:59:27.985924   25636 command_runner.go:130] > # containers images, in this directory.
	I0919 16:59:27.985929   25636 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0919 16:59:27.985939   25636 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0919 16:59:27.985946   25636 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0919 16:59:27.985953   25636 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0919 16:59:27.985960   25636 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0919 16:59:27.985973   25636 command_runner.go:130] > storage_driver = "overlay"
	I0919 16:59:27.985987   25636 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0919 16:59:27.986000   25636 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0919 16:59:27.986008   25636 command_runner.go:130] > storage_option = [
	I0919 16:59:27.986016   25636 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0919 16:59:27.986022   25636 command_runner.go:130] > ]
	I0919 16:59:27.986030   25636 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0919 16:59:27.986036   25636 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0919 16:59:27.986041   25636 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0919 16:59:27.986046   25636 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0919 16:59:27.986055   25636 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0919 16:59:27.986062   25636 command_runner.go:130] > # always happen on a node reboot
	I0919 16:59:27.986075   25636 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0919 16:59:27.986088   25636 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0919 16:59:27.986100   25636 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0919 16:59:27.986115   25636 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0919 16:59:27.986123   25636 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0919 16:59:27.986131   25636 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0919 16:59:27.986141   25636 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0919 16:59:27.986147   25636 command_runner.go:130] > # internal_wipe = true
	I0919 16:59:27.986156   25636 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0919 16:59:27.986170   25636 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0919 16:59:27.986183   25636 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0919 16:59:27.986199   25636 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0919 16:59:27.986212   25636 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0919 16:59:27.986220   25636 command_runner.go:130] > [crio.api]
	I0919 16:59:27.986226   25636 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0919 16:59:27.986231   25636 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0919 16:59:27.986238   25636 command_runner.go:130] > # IP address on which the stream server will listen.
	I0919 16:59:27.986248   25636 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0919 16:59:27.986260   25636 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0919 16:59:27.986273   25636 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0919 16:59:27.986280   25636 command_runner.go:130] > # stream_port = "0"
	I0919 16:59:27.986292   25636 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0919 16:59:27.986303   25636 command_runner.go:130] > # stream_enable_tls = false
	I0919 16:59:27.986312   25636 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0919 16:59:27.986319   25636 command_runner.go:130] > # stream_idle_timeout = ""
	I0919 16:59:27.986330   25636 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0919 16:59:27.986345   25636 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0919 16:59:27.986355   25636 command_runner.go:130] > # minutes.
	I0919 16:59:27.986362   25636 command_runner.go:130] > # stream_tls_cert = ""
	I0919 16:59:27.986376   25636 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0919 16:59:27.986390   25636 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0919 16:59:27.986399   25636 command_runner.go:130] > # stream_tls_key = ""
	I0919 16:59:27.986405   25636 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0919 16:59:27.986418   25636 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0919 16:59:27.986431   25636 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0919 16:59:27.986441   25636 command_runner.go:130] > # stream_tls_ca = ""
	I0919 16:59:27.986456   25636 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 16:59:27.986467   25636 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0919 16:59:27.986482   25636 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 16:59:27.986490   25636 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0919 16:59:27.986521   25636 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0919 16:59:27.986536   25636 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0919 16:59:27.986542   25636 command_runner.go:130] > [crio.runtime]
	I0919 16:59:27.986556   25636 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0919 16:59:27.986565   25636 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0919 16:59:27.986573   25636 command_runner.go:130] > # "nofile=1024:2048"
	I0919 16:59:27.986579   25636 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0919 16:59:27.986588   25636 command_runner.go:130] > # default_ulimits = [
	I0919 16:59:27.986594   25636 command_runner.go:130] > # ]
	I0919 16:59:27.986604   25636 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0919 16:59:27.986611   25636 command_runner.go:130] > # no_pivot = false
	I0919 16:59:27.986620   25636 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0919 16:59:27.986631   25636 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0919 16:59:27.986639   25636 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0919 16:59:27.986655   25636 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0919 16:59:27.986662   25636 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0919 16:59:27.986668   25636 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 16:59:27.986676   25636 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0919 16:59:27.986684   25636 command_runner.go:130] > # Cgroup setting for conmon
	I0919 16:59:27.986698   25636 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0919 16:59:27.986709   25636 command_runner.go:130] > conmon_cgroup = "pod"
	I0919 16:59:27.986719   25636 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0919 16:59:27.986731   25636 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0919 16:59:27.986744   25636 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 16:59:27.986751   25636 command_runner.go:130] > conmon_env = [
	I0919 16:59:27.986762   25636 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0919 16:59:27.986770   25636 command_runner.go:130] > ]
	I0919 16:59:27.986780   25636 command_runner.go:130] > # Additional environment variables to set for all the
	I0919 16:59:27.986789   25636 command_runner.go:130] > # containers. These are overridden if set in the
	I0919 16:59:27.986805   25636 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0919 16:59:27.986814   25636 command_runner.go:130] > # default_env = [
	I0919 16:59:27.986820   25636 command_runner.go:130] > # ]
	I0919 16:59:27.986833   25636 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0919 16:59:27.986839   25636 command_runner.go:130] > # selinux = false
	I0919 16:59:27.986848   25636 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0919 16:59:27.986861   25636 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0919 16:59:27.986875   25636 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0919 16:59:27.986882   25636 command_runner.go:130] > # seccomp_profile = ""
	I0919 16:59:27.986894   25636 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0919 16:59:27.986906   25636 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0919 16:59:27.986918   25636 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0919 16:59:27.986925   25636 command_runner.go:130] > # which might increase security.
	I0919 16:59:27.986932   25636 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0919 16:59:27.986949   25636 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0919 16:59:27.986963   25636 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0919 16:59:27.986977   25636 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0919 16:59:27.986990   25636 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0919 16:59:27.987001   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 16:59:27.987009   25636 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0919 16:59:27.987016   25636 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0919 16:59:27.987026   25636 command_runner.go:130] > # the cgroup blockio controller.
	I0919 16:59:27.987037   25636 command_runner.go:130] > # blockio_config_file = ""
	I0919 16:59:27.987051   25636 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0919 16:59:27.987061   25636 command_runner.go:130] > # irqbalance daemon.
	I0919 16:59:27.987073   25636 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0919 16:59:27.987086   25636 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0919 16:59:27.987095   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 16:59:27.987099   25636 command_runner.go:130] > # rdt_config_file = ""
	I0919 16:59:27.987112   25636 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0919 16:59:27.987122   25636 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0919 16:59:27.987138   25636 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0919 16:59:27.987152   25636 command_runner.go:130] > # separate_pull_cgroup = ""
	I0919 16:59:27.987163   25636 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0919 16:59:27.987176   25636 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0919 16:59:27.987182   25636 command_runner.go:130] > # will be added.
	I0919 16:59:27.987188   25636 command_runner.go:130] > # default_capabilities = [
	I0919 16:59:27.987197   25636 command_runner.go:130] > # 	"CHOWN",
	I0919 16:59:27.987204   25636 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0919 16:59:27.987214   25636 command_runner.go:130] > # 	"FSETID",
	I0919 16:59:27.987221   25636 command_runner.go:130] > # 	"FOWNER",
	I0919 16:59:27.987231   25636 command_runner.go:130] > # 	"SETGID",
	I0919 16:59:27.987237   25636 command_runner.go:130] > # 	"SETUID",
	I0919 16:59:27.987246   25636 command_runner.go:130] > # 	"SETPCAP",
	I0919 16:59:27.987253   25636 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0919 16:59:27.987262   25636 command_runner.go:130] > # 	"KILL",
	I0919 16:59:27.987266   25636 command_runner.go:130] > # ]
	I0919 16:59:27.987277   25636 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0919 16:59:27.987291   25636 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 16:59:27.987302   25636 command_runner.go:130] > # default_sysctls = [
	I0919 16:59:27.987313   25636 command_runner.go:130] > # ]
	I0919 16:59:27.987324   25636 command_runner.go:130] > # List of devices on the host that a
	I0919 16:59:27.987337   25636 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0919 16:59:27.987347   25636 command_runner.go:130] > # allowed_devices = [
	I0919 16:59:27.987352   25636 command_runner.go:130] > # 	"/dev/fuse",
	I0919 16:59:27.987355   25636 command_runner.go:130] > # ]
	I0919 16:59:27.987362   25636 command_runner.go:130] > # List of additional devices, specified as
	I0919 16:59:27.987378   25636 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0919 16:59:27.987390   25636 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0919 16:59:27.987431   25636 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 16:59:27.987439   25636 command_runner.go:130] > # additional_devices = [
	I0919 16:59:27.987443   25636 command_runner.go:130] > # ]
	I0919 16:59:27.987451   25636 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0919 16:59:27.987461   25636 command_runner.go:130] > # cdi_spec_dirs = [
	I0919 16:59:27.987469   25636 command_runner.go:130] > # 	"/etc/cdi",
	I0919 16:59:27.987479   25636 command_runner.go:130] > # 	"/var/run/cdi",
	I0919 16:59:27.987485   25636 command_runner.go:130] > # ]
	I0919 16:59:27.987498   25636 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0919 16:59:27.987513   25636 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0919 16:59:27.987522   25636 command_runner.go:130] > # Defaults to false.
	I0919 16:59:27.987527   25636 command_runner.go:130] > # device_ownership_from_security_context = false
	I0919 16:59:27.987540   25636 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0919 16:59:27.987554   25636 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0919 16:59:27.987564   25636 command_runner.go:130] > # hooks_dir = [
	I0919 16:59:27.987574   25636 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0919 16:59:27.987583   25636 command_runner.go:130] > # ]
	I0919 16:59:27.987593   25636 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0919 16:59:27.987606   25636 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0919 16:59:27.987615   25636 command_runner.go:130] > # its default mounts from the following two files:
	I0919 16:59:27.987620   25636 command_runner.go:130] > #
	I0919 16:59:27.987630   25636 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0919 16:59:27.987645   25636 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0919 16:59:27.987658   25636 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0919 16:59:27.987667   25636 command_runner.go:130] > #
	I0919 16:59:27.987677   25636 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0919 16:59:27.987690   25636 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0919 16:59:27.987703   25636 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0919 16:59:27.987714   25636 command_runner.go:130] > #      only add mounts it finds in this file.
	I0919 16:59:27.987723   25636 command_runner.go:130] > #
	I0919 16:59:27.987734   25636 command_runner.go:130] > # default_mounts_file = ""
	I0919 16:59:27.987743   25636 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0919 16:59:27.987757   25636 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0919 16:59:27.987767   25636 command_runner.go:130] > pids_limit = 1024
	I0919 16:59:27.987779   25636 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0919 16:59:27.987788   25636 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0919 16:59:27.987798   25636 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0919 16:59:27.987816   25636 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0919 16:59:27.987826   25636 command_runner.go:130] > # log_size_max = -1
	I0919 16:59:27.987837   25636 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0919 16:59:27.987847   25636 command_runner.go:130] > # log_to_journald = false
	I0919 16:59:27.987857   25636 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0919 16:59:27.987867   25636 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0919 16:59:27.987872   25636 command_runner.go:130] > # Path to directory for container attach sockets.
	I0919 16:59:27.987883   25636 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0919 16:59:27.987899   25636 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0919 16:59:27.987910   25636 command_runner.go:130] > # bind_mount_prefix = ""
	I0919 16:59:27.987923   25636 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0919 16:59:27.987930   25636 command_runner.go:130] > # read_only = false
	I0919 16:59:27.987943   25636 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0919 16:59:27.987954   25636 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0919 16:59:27.987961   25636 command_runner.go:130] > # live configuration reload.
	I0919 16:59:27.987968   25636 command_runner.go:130] > # log_level = "info"
	I0919 16:59:27.987981   25636 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0919 16:59:27.987993   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 16:59:27.988003   25636 command_runner.go:130] > # log_filter = ""
	I0919 16:59:27.988016   25636 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0919 16:59:27.988029   25636 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0919 16:59:27.988036   25636 command_runner.go:130] > # separated by comma.
	I0919 16:59:27.988043   25636 command_runner.go:130] > # uid_mappings = ""
	I0919 16:59:27.988050   25636 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0919 16:59:27.988063   25636 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0919 16:59:27.988073   25636 command_runner.go:130] > # separated by comma.
	I0919 16:59:27.988086   25636 command_runner.go:130] > # gid_mappings = ""
	I0919 16:59:27.988099   25636 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0919 16:59:27.988112   25636 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 16:59:27.988124   25636 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 16:59:27.988132   25636 command_runner.go:130] > # minimum_mappable_uid = -1
	I0919 16:59:27.988146   25636 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0919 16:59:27.988160   25636 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 16:59:27.988173   25636 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 16:59:27.988183   25636 command_runner.go:130] > # minimum_mappable_gid = -1
	I0919 16:59:27.988197   25636 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0919 16:59:27.988209   25636 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0919 16:59:27.988218   25636 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0919 16:59:27.988223   25636 command_runner.go:130] > # ctr_stop_timeout = 30
	I0919 16:59:27.988232   25636 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0919 16:59:27.988245   25636 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0919 16:59:27.988257   25636 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0919 16:59:27.988269   25636 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0919 16:59:27.988279   25636 command_runner.go:130] > drop_infra_ctr = false
	I0919 16:59:27.988295   25636 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0919 16:59:27.988307   25636 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0919 16:59:27.988322   25636 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0919 16:59:27.988333   25636 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0919 16:59:27.988346   25636 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0919 16:59:27.988358   25636 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0919 16:59:27.988368   25636 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0919 16:59:27.988383   25636 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0919 16:59:27.988390   25636 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0919 16:59:27.988397   25636 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0919 16:59:27.988420   25636 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0919 16:59:27.988438   25636 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0919 16:59:27.988445   25636 command_runner.go:130] > # default_runtime = "runc"
	I0919 16:59:27.988457   25636 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0919 16:59:27.988472   25636 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0919 16:59:27.988484   25636 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0919 16:59:27.988495   25636 command_runner.go:130] > # creation as a file is not desired either.
	I0919 16:59:27.988513   25636 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0919 16:59:27.988531   25636 command_runner.go:130] > # the hostname is being managed dynamically.
	I0919 16:59:27.988542   25636 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0919 16:59:27.988551   25636 command_runner.go:130] > # ]
	I0919 16:59:27.988560   25636 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0919 16:59:27.988570   25636 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0919 16:59:27.988580   25636 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0919 16:59:27.988594   25636 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0919 16:59:27.988603   25636 command_runner.go:130] > #
	I0919 16:59:27.988613   25636 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0919 16:59:27.988625   25636 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0919 16:59:27.988632   25636 command_runner.go:130] > #  runtime_type = "oci"
	I0919 16:59:27.988643   25636 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0919 16:59:27.988654   25636 command_runner.go:130] > #  privileged_without_host_devices = false
	I0919 16:59:27.988664   25636 command_runner.go:130] > #  allowed_annotations = []
	I0919 16:59:27.988674   25636 command_runner.go:130] > # Where:
	I0919 16:59:27.988683   25636 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0919 16:59:27.988697   25636 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0919 16:59:27.988710   25636 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0919 16:59:27.988726   25636 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0919 16:59:27.988735   25636 command_runner.go:130] > #   in $PATH.
	I0919 16:59:27.988741   25636 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0919 16:59:27.988752   25636 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0919 16:59:27.988766   25636 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0919 16:59:27.988776   25636 command_runner.go:130] > #   state.
	I0919 16:59:27.988789   25636 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0919 16:59:27.988801   25636 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0919 16:59:27.988815   25636 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0919 16:59:27.988824   25636 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0919 16:59:27.988831   25636 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0919 16:59:27.988848   25636 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0919 16:59:27.988860   25636 command_runner.go:130] > #   The currently recognized values are:
	I0919 16:59:27.988874   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0919 16:59:27.988889   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0919 16:59:27.988901   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0919 16:59:27.988911   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0919 16:59:27.988922   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0919 16:59:27.988942   25636 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0919 16:59:27.988955   25636 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0919 16:59:27.988969   25636 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0919 16:59:27.988980   25636 command_runner.go:130] > #   should be moved to the container's cgroup
	I0919 16:59:27.988990   25636 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0919 16:59:27.988998   25636 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0919 16:59:27.989003   25636 command_runner.go:130] > runtime_type = "oci"
	I0919 16:59:27.989013   25636 command_runner.go:130] > runtime_root = "/run/runc"
	I0919 16:59:27.989021   25636 command_runner.go:130] > runtime_config_path = ""
	I0919 16:59:27.989031   25636 command_runner.go:130] > monitor_path = ""
	I0919 16:59:27.989038   25636 command_runner.go:130] > monitor_cgroup = ""
	I0919 16:59:27.989049   25636 command_runner.go:130] > monitor_exec_cgroup = ""
	I0919 16:59:27.989061   25636 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0919 16:59:27.989071   25636 command_runner.go:130] > # running containers
	I0919 16:59:27.989079   25636 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0919 16:59:27.989087   25636 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0919 16:59:27.989169   25636 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0919 16:59:27.989187   25636 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0919 16:59:27.989199   25636 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0919 16:59:27.989211   25636 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0919 16:59:27.989222   25636 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0919 16:59:27.989232   25636 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0919 16:59:27.989243   25636 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0919 16:59:27.989251   25636 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0919 16:59:27.989257   25636 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0919 16:59:27.989269   25636 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0919 16:59:27.989285   25636 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0919 16:59:27.989301   25636 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0919 16:59:27.989320   25636 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0919 16:59:27.989334   25636 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0919 16:59:27.989347   25636 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0919 16:59:27.989364   25636 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0919 16:59:27.989377   25636 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0919 16:59:27.989391   25636 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0919 16:59:27.989400   25636 command_runner.go:130] > # Example:
	I0919 16:59:27.989409   25636 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0919 16:59:27.989423   25636 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0919 16:59:27.989431   25636 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0919 16:59:27.989439   25636 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0919 16:59:27.989450   25636 command_runner.go:130] > # cpuset = 0
	I0919 16:59:27.989460   25636 command_runner.go:130] > # cpushares = "0-1"
	I0919 16:59:27.989469   25636 command_runner.go:130] > # Where:
	I0919 16:59:27.989478   25636 command_runner.go:130] > # The workload name is workload-type.
	I0919 16:59:27.989492   25636 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0919 16:59:27.989504   25636 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0919 16:59:27.989514   25636 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0919 16:59:27.989524   25636 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0919 16:59:27.989537   25636 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0919 16:59:27.989546   25636 command_runner.go:130] > # 
	I0919 16:59:27.989560   25636 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0919 16:59:27.989566   25636 command_runner.go:130] > #
	I0919 16:59:27.989579   25636 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0919 16:59:27.989591   25636 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0919 16:59:27.989601   25636 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0919 16:59:27.989612   25636 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0919 16:59:27.989626   25636 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0919 16:59:27.989636   25636 command_runner.go:130] > [crio.image]
	I0919 16:59:27.989653   25636 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0919 16:59:27.989664   25636 command_runner.go:130] > # default_transport = "docker://"
	I0919 16:59:27.989674   25636 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0919 16:59:27.989687   25636 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0919 16:59:27.989696   25636 command_runner.go:130] > # global_auth_file = ""
	I0919 16:59:27.989706   25636 command_runner.go:130] > # The image used to instantiate infra containers.
	I0919 16:59:27.989717   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 16:59:27.989728   25636 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0919 16:59:27.989742   25636 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0919 16:59:27.989752   25636 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0919 16:59:27.989760   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 16:59:27.989766   25636 command_runner.go:130] > # pause_image_auth_file = ""
	I0919 16:59:27.989773   25636 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0919 16:59:27.989780   25636 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0919 16:59:27.989790   25636 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0919 16:59:27.989804   25636 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0919 16:59:27.989812   25636 command_runner.go:130] > # pause_command = "/pause"
	I0919 16:59:27.989822   25636 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0919 16:59:27.989832   25636 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0919 16:59:27.989842   25636 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0919 16:59:27.989851   25636 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0919 16:59:27.989859   25636 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0919 16:59:27.989863   25636 command_runner.go:130] > # signature_policy = ""
	I0919 16:59:27.989873   25636 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0919 16:59:27.989883   25636 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0919 16:59:27.989890   25636 command_runner.go:130] > # changing them here.
	I0919 16:59:27.989897   25636 command_runner.go:130] > # insecure_registries = [
	I0919 16:59:27.989903   25636 command_runner.go:130] > # ]
	I0919 16:59:27.989917   25636 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0919 16:59:27.989926   25636 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0919 16:59:27.989936   25636 command_runner.go:130] > # image_volumes = "mkdir"
	I0919 16:59:27.989946   25636 command_runner.go:130] > # Temporary directory to use for storing big files
	I0919 16:59:27.989951   25636 command_runner.go:130] > # big_files_temporary_dir = ""
	I0919 16:59:27.989964   25636 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0919 16:59:27.989974   25636 command_runner.go:130] > # CNI plugins.
	I0919 16:59:27.989981   25636 command_runner.go:130] > [crio.network]
	I0919 16:59:27.989994   25636 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0919 16:59:27.990006   25636 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0919 16:59:27.990016   25636 command_runner.go:130] > # cni_default_network = ""
	I0919 16:59:27.990028   25636 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0919 16:59:27.990035   25636 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0919 16:59:27.990044   25636 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0919 16:59:27.990054   25636 command_runner.go:130] > # plugin_dirs = [
	I0919 16:59:27.990061   25636 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0919 16:59:27.990070   25636 command_runner.go:130] > # ]
	I0919 16:59:27.990080   25636 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0919 16:59:27.990090   25636 command_runner.go:130] > [crio.metrics]
	I0919 16:59:27.990104   25636 command_runner.go:130] > # Globally enable or disable metrics support.
	I0919 16:59:27.990113   25636 command_runner.go:130] > enable_metrics = true
	I0919 16:59:27.990117   25636 command_runner.go:130] > # Specify enabled metrics collectors.
	I0919 16:59:27.990128   25636 command_runner.go:130] > # Per default all metrics are enabled.
	I0919 16:59:27.990148   25636 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0919 16:59:27.990162   25636 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0919 16:59:27.990172   25636 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0919 16:59:27.990182   25636 command_runner.go:130] > # metrics_collectors = [
	I0919 16:59:27.990189   25636 command_runner.go:130] > # 	"operations",
	I0919 16:59:27.990198   25636 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0919 16:59:27.990206   25636 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0919 16:59:27.990210   25636 command_runner.go:130] > # 	"operations_errors",
	I0919 16:59:27.990216   25636 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0919 16:59:27.990221   25636 command_runner.go:130] > # 	"image_pulls_by_name",
	I0919 16:59:27.990228   25636 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0919 16:59:27.990236   25636 command_runner.go:130] > # 	"image_pulls_failures",
	I0919 16:59:27.990247   25636 command_runner.go:130] > # 	"image_pulls_successes",
	I0919 16:59:27.990257   25636 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0919 16:59:27.990265   25636 command_runner.go:130] > # 	"image_layer_reuse",
	I0919 16:59:27.990275   25636 command_runner.go:130] > # 	"containers_oom_total",
	I0919 16:59:27.990282   25636 command_runner.go:130] > # 	"containers_oom",
	I0919 16:59:27.990292   25636 command_runner.go:130] > # 	"processes_defunct",
	I0919 16:59:27.990305   25636 command_runner.go:130] > # 	"operations_total",
	I0919 16:59:27.990312   25636 command_runner.go:130] > # 	"operations_latency_seconds",
	I0919 16:59:27.990317   25636 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0919 16:59:27.990321   25636 command_runner.go:130] > # 	"operations_errors_total",
	I0919 16:59:27.990326   25636 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0919 16:59:27.990333   25636 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0919 16:59:27.990337   25636 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0919 16:59:27.990343   25636 command_runner.go:130] > # 	"image_pulls_success_total",
	I0919 16:59:27.990347   25636 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0919 16:59:27.990352   25636 command_runner.go:130] > # 	"containers_oom_count_total",
	I0919 16:59:27.990356   25636 command_runner.go:130] > # ]
	I0919 16:59:27.990361   25636 command_runner.go:130] > # The port on which the metrics server will listen.
	I0919 16:59:27.990366   25636 command_runner.go:130] > # metrics_port = 9090
	I0919 16:59:27.990371   25636 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0919 16:59:27.990380   25636 command_runner.go:130] > # metrics_socket = ""
	I0919 16:59:27.990389   25636 command_runner.go:130] > # The certificate for the secure metrics server.
	I0919 16:59:27.990404   25636 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0919 16:59:27.990417   25636 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0919 16:59:27.990431   25636 command_runner.go:130] > # certificate on any modification event.
	I0919 16:59:27.990441   25636 command_runner.go:130] > # metrics_cert = ""
	I0919 16:59:27.990449   25636 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0919 16:59:27.990459   25636 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0919 16:59:27.990463   25636 command_runner.go:130] > # metrics_key = ""
	I0919 16:59:27.990471   25636 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0919 16:59:27.990477   25636 command_runner.go:130] > [crio.tracing]
	I0919 16:59:27.990483   25636 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0919 16:59:27.990487   25636 command_runner.go:130] > # enable_tracing = false
	I0919 16:59:27.990493   25636 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0919 16:59:27.990500   25636 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0919 16:59:27.990505   25636 command_runner.go:130] > # Number of samples to collect per million spans.
	I0919 16:59:27.990512   25636 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0919 16:59:27.990517   25636 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0919 16:59:27.990523   25636 command_runner.go:130] > [crio.stats]
	I0919 16:59:27.990529   25636 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0919 16:59:27.990537   25636 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0919 16:59:27.990541   25636 command_runner.go:130] > # stats_collection_period = 0
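For readability, the non-default values scattered through the CRI-O dump above boil down to roughly the following crio.conf fragment; the section headers are assumed from the standard CRI-O layout, and everything not listed stays at the commented-out default shown in the dump:

	[crio.runtime]
	conmon_env = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
	seccomp_use_default_when_empty = false
	cgroup_manager = "cgroupfs"
	pids_limit = 1024
	drop_infra_ctr = false
	pinns_path = "/usr/bin/pinns"

	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.metrics]
	enable_metrics = true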
	I0919 16:59:27.990619   25636 cni.go:84] Creating CNI manager for ""
	I0919 16:59:27.990633   25636 cni.go:136] 1 nodes found, recommending kindnet
	I0919 16:59:27.990654   25636 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 16:59:27.990670   25636 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553715 NodeName:multinode-553715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 16:59:27.990793   25636 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553715"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 16:59:27.990854   25636 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 16:59:27.990900   25636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 16:59:28.000098   25636 command_runner.go:130] > kubeadm
	I0919 16:59:28.000114   25636 command_runner.go:130] > kubectl
	I0919 16:59:28.000120   25636 command_runner.go:130] > kubelet
	I0919 16:59:28.000142   25636 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 16:59:28.000199   25636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 16:59:28.008834   25636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0919 16:59:28.025954   25636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 16:59:28.042628   25636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
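At this point the kubelet drop-in, the kubelet unit and the generated kubeadm config have all been copied onto the node, so they can be inspected out of band; a sketch using the profile name and paths from the log (the validate subcommand exists in kubeadm v1.26 and later):

	minikube ssh -p multinode-553715 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	minikube ssh -p multinode-553715 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	minikube ssh -p multinode-553715 -- sudo /var/lib/minikube/binaries/v1.28.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new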
	I0919 16:59:28.059795   25636 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0919 16:59:28.063581   25636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 16:59:28.076824   25636 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715 for IP: 192.168.39.38
	I0919 16:59:28.076856   25636 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:59:28.077006   25636 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 16:59:28.077060   25636 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 16:59:28.077119   25636 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key
	I0919 16:59:28.077135   25636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt with IP's: []
	I0919 16:59:28.312625   25636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt ...
	I0919 16:59:28.312668   25636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt: {Name:mkfee1af06cd983aeeb70807bacf8d5c7cf495bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:59:28.312857   25636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key ...
	I0919 16:59:28.312871   25636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key: {Name:mk60e166a8b1193fabb7858b6e7bc29e20c4c2b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:59:28.312979   25636 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key.383c1efe
	I0919 16:59:28.312996   25636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.crt.383c1efe with IP's: [192.168.39.38 10.96.0.1 127.0.0.1 10.0.0.1]
	I0919 16:59:28.498940   25636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.crt.383c1efe ...
	I0919 16:59:28.498970   25636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.crt.383c1efe: {Name:mkb9ccbba3611447ea6c75e718af8039f0e1fb5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:59:28.499149   25636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key.383c1efe ...
	I0919 16:59:28.499164   25636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key.383c1efe: {Name:mk99972c37e90d6b853a2cea0b7e2a0d8823cde1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:59:28.499274   25636 certs.go:337] copying /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.crt.383c1efe -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.crt
	I0919 16:59:28.499378   25636 certs.go:341] copying /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key.383c1efe -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key
	I0919 16:59:28.499459   25636 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.key
	I0919 16:59:28.499479   25636 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.crt with IP's: []
	I0919 16:59:28.780344   25636 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.crt ...
	I0919 16:59:28.780376   25636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.crt: {Name:mkc91a435b7ea3fd0b62df89d3c146d7dcdc885c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:59:28.780573   25636 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.key ...
	I0919 16:59:28.780587   25636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.key: {Name:mk386710f2bfe9413fa099745018a82ffede8d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:59:28.780680   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 16:59:28.780702   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 16:59:28.780712   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 16:59:28.780730   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 16:59:28.780742   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 16:59:28.780752   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 16:59:28.780762   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 16:59:28.780774   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 16:59:28.780823   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 16:59:28.780855   25636 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 16:59:28.780867   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 16:59:28.780890   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 16:59:28.780912   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 16:59:28.780933   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 16:59:28.780968   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 16:59:28.780992   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /usr/share/ca-certificates/132392.pem
	I0919 16:59:28.781005   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:59:28.781017   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem -> /usr/share/ca-certificates/13239.pem
	I0919 16:59:28.781498   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 16:59:28.808796   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 16:59:28.834676   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 16:59:28.860676   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 16:59:28.885859   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 16:59:28.908804   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 16:59:28.931460   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 16:59:28.953614   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 16:59:28.976162   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 16:59:28.998271   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 16:59:29.022419   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 16:59:29.045370   25636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
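With the certificates copied over, the IPs requested for the apiserver cert above (192.168.39.38, 10.96.0.1, 127.0.0.1, 10.0.0.1) can be confirmed against the file that actually landed on the node; a sketch assuming openssl is available in the guest, as the hashing steps below suggest:

	minikube ssh -p multinode-553715 -- sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'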
	I0919 16:59:29.060716   25636 ssh_runner.go:195] Run: openssl version
	I0919 16:59:29.066055   25636 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0919 16:59:29.066254   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 16:59:29.075496   25636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 16:59:29.080046   25636 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 16:59:29.080136   25636 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 16:59:29.080173   25636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 16:59:29.085546   25636 command_runner.go:130] > 3ec20f2e
	I0919 16:59:29.085610   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 16:59:29.094858   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 16:59:29.104216   25636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:59:29.108419   25636 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:59:29.108575   25636 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:59:29.108626   25636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:59:29.114031   25636 command_runner.go:130] > b5213941
	I0919 16:59:29.114247   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 16:59:29.123543   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 16:59:29.132658   25636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 16:59:29.137126   25636 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 16:59:29.137523   25636 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 16:59:29.137569   25636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 16:59:29.143049   25636 command_runner.go:130] > 51391683
	I0919 16:59:29.143212   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
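The hash-and-symlink sequence above follows OpenSSL's CApath convention: certificates in /etc/ssl/certs are looked up by subject hash, so each one needs a link named <hash>.0. For the minikubeCA cert the steps reduce to:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in the log above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"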
	I0919 16:59:29.152395   25636 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 16:59:29.156164   25636 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 16:59:29.156280   25636 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 16:59:29.156330   25636 kubeadm.go:404] StartCluster: {Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
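The StartCluster config above records how the profile was requested. In terms of minikube flags, the KVM driver, CRI-O runtime and resource settings it contains correspond roughly to an invocation like the one below; the node count is not spelled out in the struct and is assumed here for the multinode test:

	minikube start -p multinode-553715 --driver=kvm2 --container-runtime=crio --memory=2200 --cpus=2 --nodes=2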
	I0919 16:59:29.156403   25636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 16:59:29.156465   25636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 16:59:29.197630   25636 cri.go:89] found id: ""
	I0919 16:59:29.197713   25636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 16:59:29.206347   25636 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0919 16:59:29.206375   25636 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0919 16:59:29.206384   25636 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0919 16:59:29.206544   25636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 16:59:29.215056   25636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 16:59:29.224154   25636 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0919 16:59:29.224182   25636 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0919 16:59:29.224207   25636 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0919 16:59:29.224219   25636 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 16:59:29.224260   25636 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 16:59:29.224302   25636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 16:59:29.580098   25636 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 16:59:29.580126   25636 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 16:59:41.760221   25636 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 16:59:41.760252   25636 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I0919 16:59:41.760300   25636 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 16:59:41.760314   25636 command_runner.go:130] > [preflight] Running pre-flight checks
	I0919 16:59:41.760440   25636 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 16:59:41.760452   25636 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 16:59:41.760567   25636 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 16:59:41.760581   25636 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 16:59:41.760723   25636 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 16:59:41.760747   25636 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 16:59:41.760826   25636 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 16:59:41.762604   25636 out.go:204]   - Generating certificates and keys ...
	I0919 16:59:41.760886   25636 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 16:59:41.762703   25636 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 16:59:41.762720   25636 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0919 16:59:41.762791   25636 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 16:59:41.762807   25636 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0919 16:59:41.762896   25636 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 16:59:41.762909   25636 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 16:59:41.762980   25636 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 16:59:41.763009   25636 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0919 16:59:41.763110   25636 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 16:59:41.763127   25636 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0919 16:59:41.763198   25636 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 16:59:41.763210   25636 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0919 16:59:41.763357   25636 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 16:59:41.763370   25636 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0919 16:59:41.763520   25636 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-553715] and IPs [192.168.39.38 127.0.0.1 ::1]
	I0919 16:59:41.763531   25636 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-553715] and IPs [192.168.39.38 127.0.0.1 ::1]
	I0919 16:59:41.763609   25636 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 16:59:41.763630   25636 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0919 16:59:41.763778   25636 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-553715] and IPs [192.168.39.38 127.0.0.1 ::1]
	I0919 16:59:41.763791   25636 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-553715] and IPs [192.168.39.38 127.0.0.1 ::1]
	I0919 16:59:41.763887   25636 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 16:59:41.763921   25636 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 16:59:41.764016   25636 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 16:59:41.764034   25636 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 16:59:41.764106   25636 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 16:59:41.764117   25636 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0919 16:59:41.764184   25636 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 16:59:41.764199   25636 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 16:59:41.764252   25636 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 16:59:41.764259   25636 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 16:59:41.764300   25636 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 16:59:41.764306   25636 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 16:59:41.764379   25636 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 16:59:41.764396   25636 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 16:59:41.764510   25636 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 16:59:41.764529   25636 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 16:59:41.764639   25636 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 16:59:41.764654   25636 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 16:59:41.764741   25636 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 16:59:41.766380   25636 out.go:204]   - Booting up control plane ...
	I0919 16:59:41.764783   25636 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 16:59:41.766492   25636 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 16:59:41.766506   25636 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 16:59:41.766601   25636 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 16:59:41.766612   25636 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 16:59:41.766689   25636 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 16:59:41.766709   25636 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 16:59:41.766843   25636 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 16:59:41.766864   25636 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 16:59:41.766965   25636 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 16:59:41.766977   25636 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 16:59:41.767019   25636 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 16:59:41.767030   25636 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0919 16:59:41.767231   25636 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 16:59:41.767245   25636 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 16:59:41.767352   25636 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504107 seconds
	I0919 16:59:41.767371   25636 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.504107 seconds
	I0919 16:59:41.767495   25636 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 16:59:41.767508   25636 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 16:59:41.767668   25636 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 16:59:41.767679   25636 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 16:59:41.767741   25636 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 16:59:41.767752   25636 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0919 16:59:41.768019   25636 kubeadm.go:322] [mark-control-plane] Marking the node multinode-553715 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 16:59:41.768038   25636 command_runner.go:130] > [mark-control-plane] Marking the node multinode-553715 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 16:59:41.768106   25636 kubeadm.go:322] [bootstrap-token] Using token: 8eqre5.vpgvlcisgjxmqool
	I0919 16:59:41.768114   25636 command_runner.go:130] > [bootstrap-token] Using token: 8eqre5.vpgvlcisgjxmqool
	I0919 16:59:41.769670   25636 out.go:204]   - Configuring RBAC rules ...
	I0919 16:59:41.769798   25636 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 16:59:41.769810   25636 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 16:59:41.769907   25636 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 16:59:41.769918   25636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 16:59:41.770102   25636 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 16:59:41.770122   25636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 16:59:41.770273   25636 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 16:59:41.770286   25636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 16:59:41.770416   25636 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 16:59:41.770423   25636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 16:59:41.770537   25636 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 16:59:41.770554   25636 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 16:59:41.770709   25636 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 16:59:41.770723   25636 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 16:59:41.770780   25636 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0919 16:59:41.770789   25636 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 16:59:41.770845   25636 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0919 16:59:41.770854   25636 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 16:59:41.770861   25636 kubeadm.go:322] 
	I0919 16:59:41.770941   25636 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0919 16:59:41.770952   25636 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 16:59:41.770958   25636 kubeadm.go:322] 
	I0919 16:59:41.771060   25636 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0919 16:59:41.771068   25636 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 16:59:41.771074   25636 kubeadm.go:322] 
	I0919 16:59:41.771116   25636 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0919 16:59:41.771124   25636 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 16:59:41.771204   25636 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 16:59:41.771221   25636 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 16:59:41.771283   25636 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 16:59:41.771294   25636 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 16:59:41.771298   25636 kubeadm.go:322] 
	I0919 16:59:41.771343   25636 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0919 16:59:41.771348   25636 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 16:59:41.771352   25636 kubeadm.go:322] 
	I0919 16:59:41.771410   25636 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 16:59:41.771419   25636 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 16:59:41.771423   25636 kubeadm.go:322] 
	I0919 16:59:41.771474   25636 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0919 16:59:41.771484   25636 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 16:59:41.771554   25636 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 16:59:41.771564   25636 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 16:59:41.771637   25636 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 16:59:41.771647   25636 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 16:59:41.771654   25636 kubeadm.go:322] 
	I0919 16:59:41.771755   25636 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0919 16:59:41.771766   25636 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 16:59:41.771845   25636 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0919 16:59:41.771853   25636 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 16:59:41.771857   25636 kubeadm.go:322] 
	I0919 16:59:41.771954   25636 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 8eqre5.vpgvlcisgjxmqool \
	I0919 16:59:41.771961   25636 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8eqre5.vpgvlcisgjxmqool \
	I0919 16:59:41.772098   25636 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 16:59:41.772109   25636 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 16:59:41.772156   25636 command_runner.go:130] > 	--control-plane 
	I0919 16:59:41.772173   25636 kubeadm.go:322] 	--control-plane 
	I0919 16:59:41.772181   25636 kubeadm.go:322] 
	I0919 16:59:41.772296   25636 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0919 16:59:41.772306   25636 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 16:59:41.772316   25636 kubeadm.go:322] 
	I0919 16:59:41.772455   25636 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 8eqre5.vpgvlcisgjxmqool \
	I0919 16:59:41.772468   25636 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8eqre5.vpgvlcisgjxmqool \
	I0919 16:59:41.772643   25636 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 16:59:41.772658   25636 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 16:59:41.772684   25636 cni.go:84] Creating CNI manager for ""
	I0919 16:59:41.772710   25636 cni.go:136] 1 nodes found, recommending kindnet
	I0919 16:59:41.774299   25636 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 16:59:41.776064   25636 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 16:59:41.803047   25636 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0919 16:59:41.803068   25636 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0919 16:59:41.803075   25636 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0919 16:59:41.803081   25636 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 16:59:41.803087   25636 command_runner.go:130] > Access: 2023-09-19 16:59:10.727580671 +0000
	I0919 16:59:41.803092   25636 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I0919 16:59:41.803098   25636 command_runner.go:130] > Change: 2023-09-19 16:59:08.979580671 +0000
	I0919 16:59:41.803102   25636 command_runner.go:130] >  Birth: -
	I0919 16:59:41.803166   25636 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0919 16:59:41.803178   25636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0919 16:59:41.858490   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 16:59:42.842537   25636 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0919 16:59:42.848987   25636 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0919 16:59:42.868510   25636 command_runner.go:130] > serviceaccount/kindnet created
	I0919 16:59:42.882695   25636 command_runner.go:130] > daemonset.apps/kindnet created
	I0919 16:59:42.885658   25636 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.027119642s)
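As a side note: once the kindnet manifest has been applied, the DaemonSet rollout can be checked from outside the test. The kube-system namespace is an assumption here, since the applied cni.yaml itself is not shown in this log:

    # Assumes kindnet was deployed into kube-system, as minikube's CNI manifest normally does.
    kubectl --context multinode-553715 -n kube-system rollout status daemonset/kindnet --timeout=2m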
	I0919 16:59:42.885712   25636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 16:59:42.885785   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:42.885818   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=multinode-553715 minikube.k8s.io/updated_at=2023_09_19T16_59_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:42.916967   25636 command_runner.go:130] > -16
	I0919 16:59:42.917046   25636 ops.go:34] apiserver oom_adj: -16
	I0919 16:59:43.126504   25636 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0919 16:59:43.126548   25636 command_runner.go:130] > node/multinode-553715 labeled
	I0919 16:59:43.126634   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:43.222819   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:43.222990   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:43.306733   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:43.809093   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:43.901172   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:44.309664   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:44.387128   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:44.809078   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:44.893615   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:45.309697   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:45.398175   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:45.809813   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:45.895381   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:46.309166   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:46.410778   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:46.809547   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:46.923213   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:47.308795   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:47.403571   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:47.809787   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:47.892794   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:48.309461   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:48.400079   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:48.809677   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:48.893612   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:49.308822   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:49.399683   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:49.809211   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:49.901370   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:50.309447   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:50.394044   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:50.809464   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:50.891761   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:51.309444   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:51.395615   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:51.809360   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:51.898659   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:52.308810   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:52.426819   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:52.809002   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:52.903771   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:53.309363   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:53.449243   25636 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:59:53.809278   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:59:53.916557   25636 command_runner.go:130] > NAME      SECRETS   AGE
	I0919 16:59:53.917860   25636 command_runner.go:130] > default   0         0s
	I0919 16:59:53.921891   25636 kubeadm.go:1081] duration metric: took 11.036151187s to wait for elevateKubeSystemPrivileges.
	I0919 16:59:53.921916   25636 kubeadm.go:406] StartCluster complete in 24.765589319s
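For context: the repeated "serviceaccounts \"default\" not found" lines above are minikube polling roughly every 500ms until the default ServiceAccount exists before granting kube-system privileges. A shell sketch of the same wait, illustrative rather than minikube's actual code:

    # Poll until the "default" ServiceAccount appears in the default namespace.
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done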
	I0919 16:59:53.921942   25636 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:59:53.922018   25636 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:59:53.922593   25636 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:59:53.922837   25636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 16:59:53.922941   25636 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 16:59:53.923043   25636 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 16:59:53.923052   25636 addons.go:69] Setting storage-provisioner=true in profile "multinode-553715"
	I0919 16:59:53.923061   25636 addons.go:69] Setting default-storageclass=true in profile "multinode-553715"
	I0919 16:59:53.923094   25636 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-553715"
	I0919 16:59:53.923141   25636 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:59:53.923068   25636 addons.go:231] Setting addon storage-provisioner=true in "multinode-553715"
	I0919 16:59:53.923217   25636 host.go:66] Checking if "multinode-553715" exists ...
	I0919 16:59:53.923457   25636 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:59:53.923576   25636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:59:53.923599   25636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:59:53.923555   25636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:59:53.923729   25636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:59:53.924223   25636 cert_rotation.go:137] Starting client certificate rotation controller
	I0919 16:59:53.924553   25636 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 16:59:53.924569   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:53.924579   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:53.924590   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:53.934562   25636 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0919 16:59:53.934578   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:53.934584   25636 round_trippers.go:580]     Audit-Id: c4f701b7-9536-4c1b-aa1a-91cd87874f80
	I0919 16:59:53.934590   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:53.934595   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:53.934605   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:53.934610   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:53.934616   25636 round_trippers.go:580]     Content-Length: 291
	I0919 16:59:53.934621   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:53 GMT
	I0919 16:59:53.934639   25636 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e750729b-e558-4860-b72f-4a5c78572130","resourceVersion":"270","creationTimestamp":"2023-09-19T16:59:41Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0919 16:59:53.934931   25636 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e750729b-e558-4860-b72f-4a5c78572130","resourceVersion":"270","creationTimestamp":"2023-09-19T16:59:41Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0919 16:59:53.934969   25636 round_trippers.go:463] PUT https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 16:59:53.934977   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:53.934984   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:53.934992   25636 round_trippers.go:473]     Content-Type: application/json
	I0919 16:59:53.934998   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:53.938861   25636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I0919 16:59:53.939252   25636 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:59:53.939531   25636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I0919 16:59:53.939796   25636 main.go:141] libmachine: Using API Version  1
	I0919 16:59:53.939820   25636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:59:53.939852   25636 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:59:53.940166   25636 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:59:53.940311   25636 main.go:141] libmachine: Using API Version  1
	I0919 16:59:53.940334   25636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:59:53.940313   25636 main.go:141] libmachine: (multinode-553715) Calling .GetState
	I0919 16:59:53.940683   25636 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:59:53.941221   25636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:59:53.941265   25636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:59:53.942575   25636 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:59:53.942897   25636 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:59:53.943331   25636 round_trippers.go:463] GET https://192.168.39.38:8443/apis/storage.k8s.io/v1/storageclasses
	I0919 16:59:53.943349   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:53.943361   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:53.943374   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:53.945681   25636 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0919 16:59:53.945709   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:53.945719   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:53.945730   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:53.945741   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:53.945753   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:53.945765   25636 round_trippers.go:580]     Content-Length: 291
	I0919 16:59:53.945775   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:53 GMT
	I0919 16:59:53.945785   25636 round_trippers.go:580]     Audit-Id: f0258f04-f50d-495b-b9d0-aa121fd02d49
	I0919 16:59:53.945811   25636 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e750729b-e558-4860-b72f-4a5c78572130","resourceVersion":"340","creationTimestamp":"2023-09-19T16:59:41Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0919 16:59:53.945944   25636 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 16:59:53.945960   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:53.945970   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:53.945982   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:53.949072   25636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 16:59:53.949092   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:53.949101   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:53.949109   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:53.949117   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:53.949128   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:53.949139   25636 round_trippers.go:580]     Content-Length: 109
	I0919 16:59:53.949150   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:53 GMT
	I0919 16:59:53.949157   25636 round_trippers.go:580]     Audit-Id: df3916f2-5355-4bc9-955b-c886657cb00a
	I0919 16:59:53.949181   25636 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"340"},"items":[]}
	I0919 16:59:53.949434   25636 addons.go:231] Setting addon default-storageclass=true in "multinode-553715"
	I0919 16:59:53.949477   25636 host.go:66] Checking if "multinode-553715" exists ...
	I0919 16:59:53.949523   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:59:53.949541   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:53.949548   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:53 GMT
	I0919 16:59:53.949557   25636 round_trippers.go:580]     Audit-Id: bdfc382d-6a40-458e-8ca7-e5d380cead0b
	I0919 16:59:53.949568   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:53.949578   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:53.949588   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:53.949597   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:53.949610   25636 round_trippers.go:580]     Content-Length: 291
	I0919 16:59:53.949634   25636 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e750729b-e558-4860-b72f-4a5c78572130","resourceVersion":"340","creationTimestamp":"2023-09-19T16:59:41Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0919 16:59:53.949708   25636 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553715" context rescaled to 1 replicas
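The GET/PUT pair against .../deployments/coredns/scale above is the programmatic form of a kubectl scale call; an equivalent one-liner (for reference only, not what minikube itself executes) would be:

    kubectl --context multinode-553715 -n kube-system scale deployment coredns --replicas=1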
	I0919 16:59:53.949731   25636 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 16:59:53.952677   25636 out.go:177] * Verifying Kubernetes components...
	I0919 16:59:53.949853   25636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:59:53.954212   25636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:59:53.954258   25636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:59:53.956963   25636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0919 16:59:53.957363   25636 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:59:53.957831   25636 main.go:141] libmachine: Using API Version  1
	I0919 16:59:53.957852   25636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:59:53.958228   25636 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:59:53.958429   25636 main.go:141] libmachine: (multinode-553715) Calling .GetState
	I0919 16:59:53.960225   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:59:53.961789   25636 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 16:59:53.962955   25636 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 16:59:53.962968   25636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 16:59:53.962981   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:53.965913   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:53.966402   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:53.966430   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:53.966709   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:53.966902   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:53.967067   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:53.967226   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 16:59:53.971101   25636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
	I0919 16:59:53.971535   25636 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:59:53.972006   25636 main.go:141] libmachine: Using API Version  1
	I0919 16:59:53.972021   25636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:59:53.972323   25636 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:59:53.972885   25636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:59:53.972928   25636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:59:53.987320   25636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38561
	I0919 16:59:53.987698   25636 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:59:53.988195   25636 main.go:141] libmachine: Using API Version  1
	I0919 16:59:53.988214   25636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:59:53.988508   25636 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:59:53.988704   25636 main.go:141] libmachine: (multinode-553715) Calling .GetState
	I0919 16:59:53.990368   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 16:59:53.990631   25636 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 16:59:53.990651   25636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 16:59:53.990670   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 16:59:53.993853   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:53.994299   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 16:59:53.994326   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 16:59:53.994535   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 16:59:53.994738   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 16:59:53.994891   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 16:59:53.995067   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 16:59:54.159767   25636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 16:59:54.169942   25636 command_runner.go:130] > apiVersion: v1
	I0919 16:59:54.169962   25636 command_runner.go:130] > data:
	I0919 16:59:54.169969   25636 command_runner.go:130] >   Corefile: |
	I0919 16:59:54.169974   25636 command_runner.go:130] >     .:53 {
	I0919 16:59:54.169979   25636 command_runner.go:130] >         errors
	I0919 16:59:54.169986   25636 command_runner.go:130] >         health {
	I0919 16:59:54.169992   25636 command_runner.go:130] >            lameduck 5s
	I0919 16:59:54.169998   25636 command_runner.go:130] >         }
	I0919 16:59:54.170004   25636 command_runner.go:130] >         ready
	I0919 16:59:54.170014   25636 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0919 16:59:54.170028   25636 command_runner.go:130] >            pods insecure
	I0919 16:59:54.170039   25636 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0919 16:59:54.170051   25636 command_runner.go:130] >            ttl 30
	I0919 16:59:54.170058   25636 command_runner.go:130] >         }
	I0919 16:59:54.170070   25636 command_runner.go:130] >         prometheus :9153
	I0919 16:59:54.170080   25636 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0919 16:59:54.170092   25636 command_runner.go:130] >            max_concurrent 1000
	I0919 16:59:54.170106   25636 command_runner.go:130] >         }
	I0919 16:59:54.170116   25636 command_runner.go:130] >         cache 30
	I0919 16:59:54.170124   25636 command_runner.go:130] >         loop
	I0919 16:59:54.170134   25636 command_runner.go:130] >         reload
	I0919 16:59:54.170143   25636 command_runner.go:130] >         loadbalance
	I0919 16:59:54.170157   25636 command_runner.go:130] >     }
	I0919 16:59:54.170171   25636 command_runner.go:130] > kind: ConfigMap
	I0919 16:59:54.170182   25636 command_runner.go:130] > metadata:
	I0919 16:59:54.170199   25636 command_runner.go:130] >   creationTimestamp: "2023-09-19T16:59:41Z"
	I0919 16:59:54.170209   25636 command_runner.go:130] >   name: coredns
	I0919 16:59:54.170220   25636 command_runner.go:130] >   namespace: kube-system
	I0919 16:59:54.170230   25636 command_runner.go:130] >   resourceVersion: "266"
	I0919 16:59:54.170240   25636 command_runner.go:130] >   uid: a0f116ef-660a-48dc-b415-9d01634b45c7
	I0919 16:59:54.171671   25636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
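Reading the two sed expressions in the command above, the replacement Corefile gains a "log" directive ahead of "errors" plus, ahead of the "forward" block, a hosts stanza of the following form (reconstructed from the command, not read back from the cluster), which lets pods resolve host.minikube.internal to the host's address on the minikube network:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }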
	I0919 16:59:54.171903   25636 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:59:54.172211   25636 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:59:54.172509   25636 node_ready.go:35] waiting up to 6m0s for node "multinode-553715" to be "Ready" ...
	I0919 16:59:54.172609   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:54.172617   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:54.172624   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:54.172632   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:54.178037   25636 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 16:59:54.179519   25636 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0919 16:59:54.179531   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:54.179537   25636 round_trippers.go:580]     Audit-Id: 7d73f84a-1a7f-407d-a688-024036520fe0
	I0919 16:59:54.179543   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:54.179553   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:54.179564   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:54.179575   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:54.179586   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:54 GMT
	I0919 16:59:54.179892   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:54.180396   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:54.180432   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:54.180443   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:54.180452   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:54.192108   25636 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0919 16:59:54.192124   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:54.192131   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:54 GMT
	I0919 16:59:54.192139   25636 round_trippers.go:580]     Audit-Id: 297106a5-a9e1-4d1a-b4b2-2002367d9afc
	I0919 16:59:54.192147   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:54.192157   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:54.192169   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:54.192175   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:54.198531   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:54.699281   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:54.699302   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:54.699310   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:54.699320   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:54.702155   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:54.702180   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:54.702191   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:54.702199   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:54.702207   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:54 GMT
	I0919 16:59:54.702215   25636 round_trippers.go:580]     Audit-Id: 4bc88ace-4224-416d-a818-327080b313bc
	I0919 16:59:54.702225   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:54.702232   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:54.702527   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:55.007514   25636 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0919 16:59:55.014719   25636 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0919 16:59:55.027059   25636 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0919 16:59:55.040416   25636 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0919 16:59:55.048758   25636 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0919 16:59:55.062952   25636 command_runner.go:130] > pod/storage-provisioner created
	I0919 16:59:55.065375   25636 command_runner.go:130] > configmap/coredns replaced
	I0919 16:59:55.065383   25636 main.go:141] libmachine: Making call to close driver server
	I0919 16:59:55.065402   25636 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 16:59:55.065406   25636 main.go:141] libmachine: (multinode-553715) Calling .Close
	I0919 16:59:55.065448   25636 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0919 16:59:55.065485   25636 main.go:141] libmachine: Making call to close driver server
	I0919 16:59:55.065498   25636 main.go:141] libmachine: (multinode-553715) Calling .Close
	I0919 16:59:55.065693   25636 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:59:55.065716   25636 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:59:55.065727   25636 main.go:141] libmachine: Making call to close driver server
	I0919 16:59:55.065735   25636 main.go:141] libmachine: (multinode-553715) Calling .Close
	I0919 16:59:55.065825   25636 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:59:55.065844   25636 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:59:55.065844   25636 main.go:141] libmachine: (multinode-553715) DBG | Closing plugin on server side
	I0919 16:59:55.065854   25636 main.go:141] libmachine: Making call to close driver server
	I0919 16:59:55.065863   25636 main.go:141] libmachine: (multinode-553715) Calling .Close
	I0919 16:59:55.065938   25636 main.go:141] libmachine: (multinode-553715) DBG | Closing plugin on server side
	I0919 16:59:55.067687   25636 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:59:55.067716   25636 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:59:55.067751   25636 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:59:55.067769   25636 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:59:55.067785   25636 main.go:141] libmachine: Making call to close driver server
	I0919 16:59:55.067800   25636 main.go:141] libmachine: (multinode-553715) Calling .Close
	I0919 16:59:55.067997   25636 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:59:55.068011   25636 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:59:55.068037   25636 main.go:141] libmachine: (multinode-553715) DBG | Closing plugin on server side
	I0919 16:59:55.070068   25636 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0919 16:59:55.071545   25636 addons.go:502] enable addons completed in 1.148604861s: enabled=[storage-provisioner default-storageclass]
	I0919 16:59:55.199076   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:55.199100   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:55.199110   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:55.199119   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:55.201679   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:55.201702   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:55.201711   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:55.201719   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:55.201726   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:55.201738   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:55 GMT
	I0919 16:59:55.201745   25636 round_trippers.go:580]     Audit-Id: 11a51d20-d428-4256-949c-e353e15c2764
	I0919 16:59:55.201753   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:55.201968   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:55.699623   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:55.699649   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:55.699657   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:55.699662   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:55.702543   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:55.702564   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:55.702571   25636 round_trippers.go:580]     Audit-Id: c2f9a183-1d6c-451f-b3cf-76d9302a0280
	I0919 16:59:55.702576   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:55.702583   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:55.702592   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:55.702601   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:55.702610   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:55 GMT
	I0919 16:59:55.702725   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:56.199419   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:56.199445   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:56.199454   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:56.199460   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:56.202010   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:56.202033   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:56.202049   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:56 GMT
	I0919 16:59:56.202058   25636 round_trippers.go:580]     Audit-Id: 4ba39763-db82-4119-9df3-37950d3478c6
	I0919 16:59:56.202067   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:56.202076   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:56.202085   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:56.202101   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:56.202375   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:56.202780   25636 node_ready.go:58] node "multinode-553715" has status "Ready":"False"
	I0919 16:59:56.699091   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:56.699122   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:56.699135   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:56.699145   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:56.702139   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:56.702157   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:56.702164   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:56.702170   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:56 GMT
	I0919 16:59:56.702178   25636 round_trippers.go:580]     Audit-Id: a00384b1-53c7-4207-a939-0685574eb7dc
	I0919 16:59:56.702186   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:56.702194   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:56.702202   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:56.702505   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:57.199123   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:57.199149   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:57.199157   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:57.199163   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:57.203352   25636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 16:59:57.203375   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:57.203384   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:57.203393   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:57 GMT
	I0919 16:59:57.203398   25636 round_trippers.go:580]     Audit-Id: f047f248-e40f-49d2-9c93-e49d3f8b2bf0
	I0919 16:59:57.203403   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:57.203409   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:57.203413   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:57.203640   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:57.699698   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:57.699723   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:57.699731   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:57.699737   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:57.702229   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:57.702246   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:57.702256   25636 round_trippers.go:580]     Audit-Id: 8caf16dd-a6c7-4a72-9a59-177805083408
	I0919 16:59:57.702269   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:57.702277   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:57.702284   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:57.702290   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:57.702295   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:57 GMT
	I0919 16:59:57.702530   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:58.199836   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:58.199858   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:58.199866   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:58.199872   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:58.202712   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:58.202736   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:58.202745   25636 round_trippers.go:580]     Audit-Id: b893710e-d88e-4195-987d-de2b41b2301a
	I0919 16:59:58.202754   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:58.202761   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:58.202770   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:58.202778   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:58.202790   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:58 GMT
	I0919 16:59:58.203210   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:58.203672   25636 node_ready.go:58] node "multinode-553715" has status "Ready":"False"
	I0919 16:59:58.699918   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:58.699941   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:58.699949   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:58.699955   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:58.702503   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:58.702522   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:58.702529   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:58 GMT
	I0919 16:59:58.702536   25636 round_trippers.go:580]     Audit-Id: 3ddb8f68-306b-4ce7-a417-652cc44f05cc
	I0919 16:59:58.702544   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:58.702551   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:58.702559   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:58.702567   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:58.702975   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:59.199192   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:59.199215   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:59.199223   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:59.199229   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:59.202213   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:59.202233   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:59.202240   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:59.202246   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:59.202251   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:59.202256   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:59 GMT
	I0919 16:59:59.202261   25636 round_trippers.go:580]     Audit-Id: 91ef2744-0c05-4159-b751-fcb5013157be
	I0919 16:59:59.202266   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:59.202662   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 16:59:59.699309   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 16:59:59.699332   25636 round_trippers.go:469] Request Headers:
	I0919 16:59:59.699340   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:59:59.699345   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:59:59.701690   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:59:59.701715   25636 round_trippers.go:577] Response Headers:
	I0919 16:59:59.701724   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 16:59:59.701733   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:59:59 GMT
	I0919 16:59:59.701741   25636 round_trippers.go:580]     Audit-Id: 5c2bcf30-eb54-4816-a089-6c98179c2fdd
	I0919 16:59:59.701748   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:59:59.701756   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 16:59:59.701764   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 16:59:59.702118   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"331","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I0919 17:00:00.199782   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:00.199809   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:00.199816   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:00.199822   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:00.203329   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:00.203356   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:00.203366   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:00.203374   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:00.203382   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:00.203391   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:00 GMT
	I0919 17:00:00.203398   25636 round_trippers.go:580]     Audit-Id: 0b6d7935-0fc8-4e8c-8f78-d062b9e03b17
	I0919 17:00:00.203405   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:00.203739   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:00.204051   25636 node_ready.go:49] node "multinode-553715" has status "Ready":"True"
	I0919 17:00:00.204067   25636 node_ready.go:38] duration metric: took 6.031526854s waiting for node "multinode-553715" to be "Ready" ...
	I0919 17:00:00.204075   25636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:00:00.204152   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:00:00.204161   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:00.204168   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:00.204175   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:00.207960   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:00.207982   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:00.207991   25636 round_trippers.go:580]     Audit-Id: 475df160-57e7-419f-b3e8-b6e142e90580
	I0919 17:00:00.208000   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:00.208008   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:00.208016   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:00.208024   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:00.208037   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:00 GMT
	I0919 17:00:00.209207   25636 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"419","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53878 chars]
	I0919 17:00:00.212165   25636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:00.212228   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:00:00.212240   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:00.212247   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:00.212253   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:00.214866   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:00.214879   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:00.214885   25636 round_trippers.go:580]     Audit-Id: 045d8aba-8e66-4957-a0d6-ea2beb2df4a7
	I0919 17:00:00.214890   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:00.214895   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:00.214900   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:00.214905   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:00.214910   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:00 GMT
	I0919 17:00:00.215066   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"419","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 17:00:00.215514   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:00.215529   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:00.215537   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:00.215542   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:00.218299   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:00.218310   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:00.218316   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:00.218321   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:00.218326   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:00 GMT
	I0919 17:00:00.218331   25636 round_trippers.go:580]     Audit-Id: 019d1594-f0fc-4f6a-8e1c-f0b6f5f1aebf
	I0919 17:00:00.218336   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:00.218341   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:00.218583   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:00.218958   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:00:00.218971   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:00.218978   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:00.218984   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:00.223000   25636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:00:00.223012   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:00.223018   25636 round_trippers.go:580]     Audit-Id: 871eed0a-a338-40fc-95da-c07763c653c1
	I0919 17:00:00.223023   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:00.223028   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:00.223033   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:00.223038   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:00.223043   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:00 GMT
	I0919 17:00:00.223238   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"419","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 17:00:00.223709   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:00.223727   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:00.223737   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:00.223745   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:00.227786   25636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:00:00.227798   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:00.227803   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:00.227811   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:00.227816   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:00 GMT
	I0919 17:00:00.227821   25636 round_trippers.go:580]     Audit-Id: 2b865e59-f0db-4d4d-9535-75be4b64eda2
	I0919 17:00:00.227826   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:00.227834   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:00.228101   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:00.729034   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:00:00.729059   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:00.729071   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:00.729084   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:00.734345   25636 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 17:00:00.734368   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:00.734376   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:00.734381   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:00.734386   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:00.734392   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:00.734397   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:00 GMT
	I0919 17:00:00.734405   25636 round_trippers.go:580]     Audit-Id: 0e1b3533-19eb-4737-bc79-e2f014c881ec
	I0919 17:00:00.735637   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"419","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 17:00:00.736058   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:00.736070   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:00.736077   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:00.736083   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:00.739651   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:00.739672   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:00.739680   25636 round_trippers.go:580]     Audit-Id: d4fc8882-e891-468e-a7f6-f573ad6f0619
	I0919 17:00:00.739687   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:00.739692   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:00.739697   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:00.739705   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:00.739713   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:00 GMT
	I0919 17:00:00.740396   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:01.229038   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:00:01.229064   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:01.229072   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:01.229078   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:01.232183   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:01.232203   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:01.232210   25636 round_trippers.go:580]     Audit-Id: 968d9dc4-ebd4-4947-b636-0a7e21276b6c
	I0919 17:00:01.232215   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:01.232220   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:01.232225   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:01.232230   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:01.232239   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:01 GMT
	I0919 17:00:01.232967   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"419","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 17:00:01.233398   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:01.233413   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:01.233421   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:01.233426   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:01.236315   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:01.236334   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:01.236343   25636 round_trippers.go:580]     Audit-Id: 45b75320-628f-4f12-8d36-83b3611a20c7
	I0919 17:00:01.236352   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:01.236361   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:01.236368   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:01.236373   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:01.236378   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:01 GMT
	I0919 17:00:01.236692   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:01.729378   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:00:01.729402   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:01.729410   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:01.729416   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:01.732825   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:01.732845   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:01.732854   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:01.732863   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:01.732870   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:01.732877   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:01 GMT
	I0919 17:00:01.732885   25636 round_trippers.go:580]     Audit-Id: 5eb072d5-f849-45a1-8197-126f57ef67d6
	I0919 17:00:01.732892   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:01.733075   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"419","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 17:00:01.733485   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:01.733496   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:01.733503   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:01.733508   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:01.736659   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:01.736674   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:01.736681   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:01.736686   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:01.736691   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:01 GMT
	I0919 17:00:01.736696   25636 round_trippers.go:580]     Audit-Id: e57089d3-ee62-4894-94e2-2f5ee5ae993a
	I0919 17:00:01.736700   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:01.736705   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:01.737176   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:02.228839   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:00:02.228864   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.228872   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.228878   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.231873   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:02.231902   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.231913   25636 round_trippers.go:580]     Audit-Id: fe838981-6892-46a4-a03f-c7ecdcc17b34
	I0919 17:00:02.231921   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.231928   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.231936   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.231944   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.231952   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.232126   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"434","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0919 17:00:02.232587   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:02.232600   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.232607   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.232613   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.234881   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:02.234900   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.234910   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.234918   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.234927   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.234935   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.234942   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.234950   25636 round_trippers.go:580]     Audit-Id: 5ba48131-a631-4b57-9e06-103a80dee2de
	I0919 17:00:02.235492   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:02.235769   25636 pod_ready.go:92] pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:02.235783   25636 pod_ready.go:81] duration metric: took 2.023598833s waiting for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.235793   25636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.235842   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:00:02.235850   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.235856   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.235862   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.238159   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:02.238176   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.238182   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.238187   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.238192   25636 round_trippers.go:580]     Audit-Id: fabfbad3-a5c5-4e27-924e-b90a38037103
	I0919 17:00:02.238196   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.238201   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.238206   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.238574   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"310","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0919 17:00:02.238896   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:02.238905   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.238912   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.238917   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.240874   25636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:00:02.240891   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.240897   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.240903   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.240908   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.240913   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.240918   25636 round_trippers.go:580]     Audit-Id: aa07b4c9-9e5a-4daf-9ab2-d2333029ef93
	I0919 17:00:02.240923   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.241060   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:02.241318   25636 pod_ready.go:92] pod "etcd-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:02.241330   25636 pod_ready.go:81] duration metric: took 5.531225ms waiting for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.241341   25636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.241386   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553715
	I0919 17:00:02.241394   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.241400   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.241407   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.243178   25636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:00:02.243193   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.243199   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.243204   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.243209   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.243214   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.243222   25636 round_trippers.go:580]     Audit-Id: 90c8bc4f-6bd4-49a4-9c00-418fb991da0f
	I0919 17:00:02.243230   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.243382   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553715","namespace":"kube-system","uid":"e2712b6a-6771-4fb1-9b6d-e50e10e45411","resourceVersion":"308","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.mirror":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.seen":"2023-09-19T16:59:41.749099288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0919 17:00:02.243745   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:02.243756   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.243763   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.243769   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.246235   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:02.246250   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.246256   25636 round_trippers.go:580]     Audit-Id: 7cb573b4-1f95-4d6d-9793-bf8e1e570c00
	I0919 17:00:02.246261   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.246266   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.246271   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.246277   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.246282   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.246505   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:02.246759   25636 pod_ready.go:92] pod "kube-apiserver-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:02.246771   25636 pod_ready.go:81] duration metric: took 5.423882ms waiting for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.246779   25636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.246827   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553715
	I0919 17:00:02.246834   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.246841   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.246847   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.250685   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:02.250703   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.250709   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.250715   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.250720   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.250725   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.250731   25636 round_trippers.go:580]     Audit-Id: e742e041-38c9-4022-82bb-574746cc718f
	I0919 17:00:02.250736   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.250881   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553715","namespace":"kube-system","uid":"56eb8685-d2ae-4f50-8da1-dca616585190","resourceVersion":"313","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.mirror":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.seen":"2023-09-19T16:59:41.749100351Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0919 17:00:02.251242   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:02.251252   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.251259   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.251265   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.253814   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:02.253832   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.253840   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.253845   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.253850   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.253855   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.253860   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.253865   25636 round_trippers.go:580]     Audit-Id: 8a0c851c-cee7-4997-baed-83ead76097d8
	I0919 17:00:02.254447   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:02.254711   25636 pod_ready.go:92] pod "kube-controller-manager-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:02.254722   25636 pod_ready.go:81] duration metric: took 7.937189ms waiting for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.254732   25636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.400107   25636 request.go:629] Waited for 145.316455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:00:02.400184   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:00:02.400190   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.400197   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.400203   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.402786   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:02.402806   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.402816   25636 round_trippers.go:580]     Audit-Id: 4640e157-d646-4d90-98c2-f2e62c147761
	I0919 17:00:02.402822   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.402828   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.402833   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.402838   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.402843   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.402999   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvcz9","generateName":"kube-proxy-","namespace":"kube-system","uid":"377d6478-cda2-47b9-8af8-cff3064e8524","resourceVersion":"404","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0919 17:00:02.600789   25636 request.go:629] Waited for 197.377589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:02.600844   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:02.600850   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.600858   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.600863   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.603743   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:02.603768   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.603776   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.603782   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.603787   25636 round_trippers.go:580]     Audit-Id: f4f74984-a169-47b8-b5fe-099ae470b842
	I0919 17:00:02.603792   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.603797   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.603802   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.603982   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:02.604313   25636 pod_ready.go:92] pod "kube-proxy-tvcz9" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:02.604329   25636 pod_ready.go:81] duration metric: took 349.592275ms waiting for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.604339   25636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:02.800053   25636 request.go:629] Waited for 195.64219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:00:02.800118   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:00:02.800125   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.800133   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.800149   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:02.803321   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:02.803343   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:02.803350   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:02.803356   25636 round_trippers.go:580]     Audit-Id: 76edd9ef-617d-4751-8ada-d1512e9149ac
	I0919 17:00:02.803364   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:02.803369   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:02.803377   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:02.803385   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:02.804140   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553715","namespace":"kube-system","uid":"27c15070-fba4-4237-b6d2-4727af1e5809","resourceVersion":"389","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.mirror":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.seen":"2023-09-19T16:59:41.749088169Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0919 17:00:02.999813   25636 request.go:629] Waited for 195.318848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:02.999880   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:02.999885   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:02.999893   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:02.999899   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:03.002977   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:03.003001   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:03.003008   25636 round_trippers.go:580]     Audit-Id: 1bd66d8f-d952-45e4-b37c-522722e8845b
	I0919 17:00:03.003020   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:03.003026   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:03.003031   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:03.003036   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:03.003041   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:02 GMT
	I0919 17:00:03.003659   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:03.004017   25636 pod_ready.go:92] pod "kube-scheduler-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:03.004029   25636 pod_ready.go:81] duration metric: took 399.684363ms waiting for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:03.004040   25636 pod_ready.go:38] duration metric: took 2.799955213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
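
The pod_ready loop above repeatedly GETs each control-plane pod (and its node) until the pod's Ready condition reports True. A minimal client-go sketch of that kind of readiness poll follows; this is illustrative only, not minikube's pod_ready.go, and the namespace, pod name, interval, and 6m timeout are simply taken from the log above.

// Sketch only: poll a pod until its Ready condition is True, assuming a
// kubeconfig in the default location. Names/values mirror the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll roughly every 500ms for up to 6 minutes, as in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-pffkm", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}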
	I0919 17:00:03.004052   25636 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:00:03.004097   25636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:00:03.017200   25636 command_runner.go:130] > 1122
	I0919 17:00:03.017247   25636 api_server.go:72] duration metric: took 9.067493174s to wait for apiserver process to appear ...
	I0919 17:00:03.017258   25636 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:00:03.017272   25636 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:00:03.022224   25636 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0919 17:00:03.022285   25636 round_trippers.go:463] GET https://192.168.39.38:8443/version
	I0919 17:00:03.022292   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:03.022300   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:03.022307   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:03.023565   25636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:00:03.023585   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:03.023591   25636 round_trippers.go:580]     Audit-Id: 35f8d982-41c7-45c8-b0b8-057b4aa5fc94
	I0919 17:00:03.023597   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:03.023602   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:03.023607   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:03.023615   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:03.023624   25636 round_trippers.go:580]     Content-Length: 263
	I0919 17:00:03.023629   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:03 GMT
	I0919 17:00:03.023647   25636 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0919 17:00:03.023743   25636 api_server.go:141] control plane version: v1.28.2
	I0919 17:00:03.023760   25636 api_server.go:131] duration metric: took 6.496206ms to wait for apiserver health ...
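
The apiserver health wait above is two plain GETs: /healthz (expecting the literal body "ok") and /version (which returned v1.28.2 here). A rough equivalent using client-go's discovery client is sketched below; it assumes the same clientset (cs) built in the previous sketch and the same imports, and is not the api_server.go implementation.

// Sketch only: check /healthz and report the server version, mirroring the
// api_server.go wait above. cs is the clientset from the previous sketch.
func waitAPIHealthy(ctx context.Context, cs kubernetes.Interface) error {
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		return err
	}
	if string(raw) != "ok" {
		return fmt.Errorf("healthz returned %q", raw)
	}
	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s\n", ver.GitVersion) // v1.28.2 in this run
	return nil
}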
	I0919 17:00:03.023767   25636 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:00:03.200170   25636 request.go:629] Waited for 176.34693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:00:03.200242   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:00:03.200247   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:03.200255   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:03.200261   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:03.204016   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:03.204041   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:03.204051   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:03.204060   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:03.204067   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:03 GMT
	I0919 17:00:03.204073   25636 round_trippers.go:580]     Audit-Id: 46e92c70-9570-447a-a80c-bc2abb03d697
	I0919 17:00:03.204081   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:03.204089   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:03.205721   25636 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"434","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I0919 17:00:03.207340   25636 system_pods.go:59] 8 kube-system pods found
	I0919 17:00:03.207366   25636 system_pods.go:61] "coredns-5dd5756b68-pffkm" [fbc226fb-43a9-4e0f-ac99-614f2740485d] Running
	I0919 17:00:03.207373   25636 system_pods.go:61] "etcd-multinode-553715" [905a0370-ab9d-4138-bd11-12297717f1c5] Running
	I0919 17:00:03.207380   25636 system_pods.go:61] "kindnet-lmmc5" [2479ec2b-6cd3-4fb2-b85f-43b175cfbb79] Running
	I0919 17:00:03.207386   25636 system_pods.go:61] "kube-apiserver-multinode-553715" [e2712b6a-6771-4fb1-9b6d-e50e10e45411] Running
	I0919 17:00:03.207393   25636 system_pods.go:61] "kube-controller-manager-multinode-553715" [56eb8685-d2ae-4f50-8da1-dca616585190] Running
	I0919 17:00:03.207399   25636 system_pods.go:61] "kube-proxy-tvcz9" [377d6478-cda2-47b9-8af8-cff3064e8524] Running
	I0919 17:00:03.207406   25636 system_pods.go:61] "kube-scheduler-multinode-553715" [27c15070-fba4-4237-b6d2-4727af1e5809] Running
	I0919 17:00:03.207417   25636 system_pods.go:61] "storage-provisioner" [6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8] Running
	I0919 17:00:03.207425   25636 system_pods.go:74] duration metric: took 183.651479ms to wait for pod list to return data ...
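
Several of the requests above are delayed by client-go's client-side rate limiter rather than by the server ("Waited for ... due to client-side throttling, not priority and fairness"). That limiter is driven by the QPS and Burst fields on rest.Config; the snippet below is a hedged illustration of where those knobs live, with arbitrary example values, not minikube's actual settings.

// Sketch only: client-go throttles requests client-side via QPS/Burst on the
// rest.Config (the defaults are low, which produces the "client-side
// throttling" waits in the log). Example values are illustrative.
cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
if err != nil {
	panic(err)
}
cfg.QPS = 50    // default is 5 requests/second
cfg.Burst = 100 // default burst is 10
cs, err := kubernetes.NewForConfig(cfg)
if err != nil {
	panic(err)
}
_ = cs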
	I0919 17:00:03.207438   25636 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:00:03.399811   25636 request.go:629] Waited for 192.305092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/default/serviceaccounts
	I0919 17:00:03.399881   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/default/serviceaccounts
	I0919 17:00:03.399886   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:03.399893   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:03.399899   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:03.402906   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:03.402933   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:03.402943   25636 round_trippers.go:580]     Audit-Id: 310d66fc-3e34-43c7-8073-87017a973ccf
	I0919 17:00:03.402951   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:03.402958   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:03.402966   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:03.402980   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:03.402990   25636 round_trippers.go:580]     Content-Length: 261
	I0919 17:00:03.403006   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:03 GMT
	I0919 17:00:03.403036   25636 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cfc0cbe8-f46b-4c2d-9338-ce249fe7510f","resourceVersion":"336","creationTimestamp":"2023-09-19T16:59:53Z"}}]}
	I0919 17:00:03.403222   25636 default_sa.go:45] found service account: "default"
	I0919 17:00:03.403240   25636 default_sa.go:55] duration metric: took 195.794171ms for default service account to be created ...
	I0919 17:00:03.403250   25636 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:00:03.600748   25636 request.go:629] Waited for 197.4359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:00:03.600824   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:00:03.600829   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:03.600841   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:03.600851   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:03.604213   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:03.604236   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:03.604249   25636 round_trippers.go:580]     Audit-Id: 31a84f1a-87e7-4b54-9c6f-d6a45d42b88e
	I0919 17:00:03.604257   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:03.604264   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:03.604271   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:03.604279   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:03.604288   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:03 GMT
	I0919 17:00:03.605216   25636 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"434","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I0919 17:00:03.606865   25636 system_pods.go:86] 8 kube-system pods found
	I0919 17:00:03.606884   25636 system_pods.go:89] "coredns-5dd5756b68-pffkm" [fbc226fb-43a9-4e0f-ac99-614f2740485d] Running
	I0919 17:00:03.606888   25636 system_pods.go:89] "etcd-multinode-553715" [905a0370-ab9d-4138-bd11-12297717f1c5] Running
	I0919 17:00:03.606893   25636 system_pods.go:89] "kindnet-lmmc5" [2479ec2b-6cd3-4fb2-b85f-43b175cfbb79] Running
	I0919 17:00:03.606897   25636 system_pods.go:89] "kube-apiserver-multinode-553715" [e2712b6a-6771-4fb1-9b6d-e50e10e45411] Running
	I0919 17:00:03.606901   25636 system_pods.go:89] "kube-controller-manager-multinode-553715" [56eb8685-d2ae-4f50-8da1-dca616585190] Running
	I0919 17:00:03.606905   25636 system_pods.go:89] "kube-proxy-tvcz9" [377d6478-cda2-47b9-8af8-cff3064e8524] Running
	I0919 17:00:03.606910   25636 system_pods.go:89] "kube-scheduler-multinode-553715" [27c15070-fba4-4237-b6d2-4727af1e5809] Running
	I0919 17:00:03.606914   25636 system_pods.go:89] "storage-provisioner" [6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8] Running
	I0919 17:00:03.606920   25636 system_pods.go:126] duration metric: took 203.664227ms to wait for k8s-apps to be running ...
	I0919 17:00:03.606928   25636 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:00:03.606966   25636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:00:03.619488   25636 system_svc.go:56] duration metric: took 12.550776ms WaitForService to wait for kubelet.
	I0919 17:00:03.619511   25636 kubeadm.go:581] duration metric: took 9.669759745s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
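
The kubelet check just above runs "sudo systemctl is-active --quiet service kubelet" on the VM through minikube's ssh_runner and only looks at the exit status. A local illustration of the same probe is sketched below (a hypothetical helper using os/exec, not minikube code, and omitting the SSH hop).

// Sketch only: "systemctl is-active --quiet <unit>" exits 0 when the unit is
// active, so the check reduces to inspecting the command's exit status.
// Requires: import "os/exec". minikube runs the equivalent over SSH.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}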
	I0919 17:00:03.619529   25636 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:00:03.799888   25636 request.go:629] Waited for 180.294857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I0919 17:00:03.799960   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I0919 17:00:03.799966   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:03.799973   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:03.799983   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:03.803302   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:03.803327   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:03.803337   25636 round_trippers.go:580]     Audit-Id: cb670c1a-0268-441f-b2d9-36e729f87b56
	I0919 17:00:03.803345   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:03.803354   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:03.803364   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:03.803374   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:03.803380   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:03 GMT
	I0919 17:00:03.803669   25636 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I0919 17:00:03.804008   25636 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:00:03.804029   25636 node_conditions.go:123] node cpu capacity is 2
	I0919 17:00:03.804041   25636 node_conditions.go:105] duration metric: took 184.506912ms to run NodePressure ...
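
The NodePressure step reads each node's capacity from the NodeList (ephemeral-storage 17784752Ki and 2 CPUs in this run). A minimal client-go sketch of that read follows, reusing the clientset and imports from the first sketch; it is not the node_conditions.go implementation.

// Sketch only: list nodes and print the capacity fields reported in the log
// (ephemeral-storage and cpu). cs, corev1, metav1 as in the first sketch.
nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
if err != nil {
	panic(err)
}
for _, n := range nodes.Items {
	eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := n.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
}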
	I0919 17:00:03.804062   25636 start.go:228] waiting for startup goroutines ...
	I0919 17:00:03.804076   25636 start.go:233] waiting for cluster config update ...
	I0919 17:00:03.804090   25636 start.go:242] writing updated cluster config ...
	I0919 17:00:03.806383   25636 out.go:177] 
	I0919 17:00:03.808106   25636 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:00:03.808169   25636 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 17:00:03.809917   25636 out.go:177] * Starting worker node multinode-553715-m02 in cluster multinode-553715
	I0919 17:00:03.811231   25636 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 17:00:03.811247   25636 cache.go:57] Caching tarball of preloaded images
	I0919 17:00:03.811344   25636 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:00:03.811358   25636 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 17:00:03.811417   25636 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 17:00:03.811555   25636 start.go:365] acquiring machines lock for multinode-553715-m02: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:00:03.811598   25636 start.go:369] acquired machines lock for "multinode-553715-m02" in 22.891µs
	I0919 17:00:03.811619   25636 start.go:93] Provisioning new machine with config: &{Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0919 17:00:03.811678   25636 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0919 17:00:03.813354   25636 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 17:00:03.813429   25636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:00:03.813460   25636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:00:03.827595   25636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40777
	I0919 17:00:03.828073   25636 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:00:03.828643   25636 main.go:141] libmachine: Using API Version  1
	I0919 17:00:03.828669   25636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:00:03.829026   25636 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:00:03.829199   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetMachineName
	I0919 17:00:03.829370   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:00:03.829540   25636 start.go:159] libmachine.API.Create for "multinode-553715" (driver="kvm2")
	I0919 17:00:03.829569   25636 client.go:168] LocalClient.Create starting
	I0919 17:00:03.829605   25636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem
	I0919 17:00:03.829645   25636 main.go:141] libmachine: Decoding PEM data...
	I0919 17:00:03.829669   25636 main.go:141] libmachine: Parsing certificate...
	I0919 17:00:03.829735   25636 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem
	I0919 17:00:03.829762   25636 main.go:141] libmachine: Decoding PEM data...
	I0919 17:00:03.829782   25636 main.go:141] libmachine: Parsing certificate...
	I0919 17:00:03.829809   25636 main.go:141] libmachine: Running pre-create checks...
	I0919 17:00:03.829820   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .PreCreateCheck
	I0919 17:00:03.830018   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetConfigRaw
	I0919 17:00:03.830451   25636 main.go:141] libmachine: Creating machine...
	I0919 17:00:03.830470   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .Create
	I0919 17:00:03.830627   25636 main.go:141] libmachine: (multinode-553715-m02) Creating KVM machine...
	I0919 17:00:03.831876   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found existing default KVM network
	I0919 17:00:03.831933   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found existing private KVM network mk-multinode-553715
	I0919 17:00:03.832067   25636 main.go:141] libmachine: (multinode-553715-m02) Setting up store path in /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02 ...
	I0919 17:00:03.832095   25636 main.go:141] libmachine: (multinode-553715-m02) Building disk image from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 17:00:03.832160   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:03.832042   25997 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:00:03.832243   25636 main.go:141] libmachine: (multinode-553715-m02) Downloading /home/jenkins/minikube-integration/17240-6042/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 17:00:04.043158   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:04.043018   25997 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa...
	I0919 17:00:04.245978   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:04.245876   25997 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/multinode-553715-m02.rawdisk...
	I0919 17:00:04.246007   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Writing magic tar header
	I0919 17:00:04.246019   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Writing SSH key tar header
	I0919 17:00:04.246028   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:04.245982   25997 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02 ...
	I0919 17:00:04.246146   25636 main.go:141] libmachine: (multinode-553715-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02 (perms=drwx------)
	I0919 17:00:04.246182   25636 main.go:141] libmachine: (multinode-553715-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines (perms=drwxr-xr-x)
	I0919 17:00:04.246198   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02
	I0919 17:00:04.246216   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines
	I0919 17:00:04.246234   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:00:04.246256   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042
	I0919 17:00:04.246278   25636 main.go:141] libmachine: (multinode-553715-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube (perms=drwxr-xr-x)
	I0919 17:00:04.246294   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 17:00:04.246316   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Checking permissions on dir: /home/jenkins
	I0919 17:00:04.246333   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Checking permissions on dir: /home
	I0919 17:00:04.246353   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Skipping /home - not owner
	I0919 17:00:04.246371   25636 main.go:141] libmachine: (multinode-553715-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042 (perms=drwxrwxr-x)
	I0919 17:00:04.246391   25636 main.go:141] libmachine: (multinode-553715-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 17:00:04.246406   25636 main.go:141] libmachine: (multinode-553715-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 17:00:04.246423   25636 main.go:141] libmachine: (multinode-553715-m02) Creating domain...
	I0919 17:00:04.247119   25636 main.go:141] libmachine: (multinode-553715-m02) define libvirt domain using xml: 
	I0919 17:00:04.247139   25636 main.go:141] libmachine: (multinode-553715-m02) <domain type='kvm'>
	I0919 17:00:04.247151   25636 main.go:141] libmachine: (multinode-553715-m02)   <name>multinode-553715-m02</name>
	I0919 17:00:04.247161   25636 main.go:141] libmachine: (multinode-553715-m02)   <memory unit='MiB'>2200</memory>
	I0919 17:00:04.247176   25636 main.go:141] libmachine: (multinode-553715-m02)   <vcpu>2</vcpu>
	I0919 17:00:04.247186   25636 main.go:141] libmachine: (multinode-553715-m02)   <features>
	I0919 17:00:04.247192   25636 main.go:141] libmachine: (multinode-553715-m02)     <acpi/>
	I0919 17:00:04.247200   25636 main.go:141] libmachine: (multinode-553715-m02)     <apic/>
	I0919 17:00:04.247206   25636 main.go:141] libmachine: (multinode-553715-m02)     <pae/>
	I0919 17:00:04.247217   25636 main.go:141] libmachine: (multinode-553715-m02)     
	I0919 17:00:04.247227   25636 main.go:141] libmachine: (multinode-553715-m02)   </features>
	I0919 17:00:04.247233   25636 main.go:141] libmachine: (multinode-553715-m02)   <cpu mode='host-passthrough'>
	I0919 17:00:04.247259   25636 main.go:141] libmachine: (multinode-553715-m02)   
	I0919 17:00:04.247282   25636 main.go:141] libmachine: (multinode-553715-m02)   </cpu>
	I0919 17:00:04.247295   25636 main.go:141] libmachine: (multinode-553715-m02)   <os>
	I0919 17:00:04.247309   25636 main.go:141] libmachine: (multinode-553715-m02)     <type>hvm</type>
	I0919 17:00:04.247323   25636 main.go:141] libmachine: (multinode-553715-m02)     <boot dev='cdrom'/>
	I0919 17:00:04.247337   25636 main.go:141] libmachine: (multinode-553715-m02)     <boot dev='hd'/>
	I0919 17:00:04.247352   25636 main.go:141] libmachine: (multinode-553715-m02)     <bootmenu enable='no'/>
	I0919 17:00:04.247369   25636 main.go:141] libmachine: (multinode-553715-m02)   </os>
	I0919 17:00:04.247384   25636 main.go:141] libmachine: (multinode-553715-m02)   <devices>
	I0919 17:00:04.247398   25636 main.go:141] libmachine: (multinode-553715-m02)     <disk type='file' device='cdrom'>
	I0919 17:00:04.247418   25636 main.go:141] libmachine: (multinode-553715-m02)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/boot2docker.iso'/>
	I0919 17:00:04.247432   25636 main.go:141] libmachine: (multinode-553715-m02)       <target dev='hdc' bus='scsi'/>
	I0919 17:00:04.247458   25636 main.go:141] libmachine: (multinode-553715-m02)       <readonly/>
	I0919 17:00:04.247479   25636 main.go:141] libmachine: (multinode-553715-m02)     </disk>
	I0919 17:00:04.247496   25636 main.go:141] libmachine: (multinode-553715-m02)     <disk type='file' device='disk'>
	I0919 17:00:04.247518   25636 main.go:141] libmachine: (multinode-553715-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 17:00:04.247540   25636 main.go:141] libmachine: (multinode-553715-m02)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/multinode-553715-m02.rawdisk'/>
	I0919 17:00:04.247554   25636 main.go:141] libmachine: (multinode-553715-m02)       <target dev='hda' bus='virtio'/>
	I0919 17:00:04.247567   25636 main.go:141] libmachine: (multinode-553715-m02)     </disk>
	I0919 17:00:04.247577   25636 main.go:141] libmachine: (multinode-553715-m02)     <interface type='network'>
	I0919 17:00:04.247596   25636 main.go:141] libmachine: (multinode-553715-m02)       <source network='mk-multinode-553715'/>
	I0919 17:00:04.247616   25636 main.go:141] libmachine: (multinode-553715-m02)       <model type='virtio'/>
	I0919 17:00:04.247631   25636 main.go:141] libmachine: (multinode-553715-m02)     </interface>
	I0919 17:00:04.247645   25636 main.go:141] libmachine: (multinode-553715-m02)     <interface type='network'>
	I0919 17:00:04.247660   25636 main.go:141] libmachine: (multinode-553715-m02)       <source network='default'/>
	I0919 17:00:04.247673   25636 main.go:141] libmachine: (multinode-553715-m02)       <model type='virtio'/>
	I0919 17:00:04.247683   25636 main.go:141] libmachine: (multinode-553715-m02)     </interface>
	I0919 17:00:04.247703   25636 main.go:141] libmachine: (multinode-553715-m02)     <serial type='pty'>
	I0919 17:00:04.247720   25636 main.go:141] libmachine: (multinode-553715-m02)       <target port='0'/>
	I0919 17:00:04.247733   25636 main.go:141] libmachine: (multinode-553715-m02)     </serial>
	I0919 17:00:04.247745   25636 main.go:141] libmachine: (multinode-553715-m02)     <console type='pty'>
	I0919 17:00:04.247756   25636 main.go:141] libmachine: (multinode-553715-m02)       <target type='serial' port='0'/>
	I0919 17:00:04.247768   25636 main.go:141] libmachine: (multinode-553715-m02)     </console>
	I0919 17:00:04.247789   25636 main.go:141] libmachine: (multinode-553715-m02)     <rng model='virtio'>
	I0919 17:00:04.247803   25636 main.go:141] libmachine: (multinode-553715-m02)       <backend model='random'>/dev/random</backend>
	I0919 17:00:04.247816   25636 main.go:141] libmachine: (multinode-553715-m02)     </rng>
	I0919 17:00:04.247828   25636 main.go:141] libmachine: (multinode-553715-m02)     
	I0919 17:00:04.247838   25636 main.go:141] libmachine: (multinode-553715-m02)     
	I0919 17:00:04.247845   25636 main.go:141] libmachine: (multinode-553715-m02)   </devices>
	I0919 17:00:04.247858   25636 main.go:141] libmachine: (multinode-553715-m02) </domain>
	I0919 17:00:04.247867   25636 main.go:141] libmachine: (multinode-553715-m02) 
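Note: the XML above is the libvirt domain definition the kvm2 driver emits for the new node (boot2docker ISO as cdrom, raw disk, two virtio NICs on mk-multinode-553715 and default). As an illustrative sketch only, and not the driver's actual code path, an equivalent domain could be registered from Go by shelling out to virsh; the XML file path is assumed and virsh must be installed.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Assumes the domain XML from the log has been written to this (hypothetical) path.
    	xmlPath := "/tmp/multinode-553715-m02.xml"

    	// `virsh define` registers the domain; `virsh start` boots it.
    	for _, args := range [][]string{
    		{"define", xmlPath},
    		{"start", "multinode-553715-m02"},
    	} {
    		out, err := exec.Command("virsh", args...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("virsh %v failed: %v\n%s\n", args, err, out)
    			return
    		}
    		fmt.Printf("virsh %v: %s\n", args, out)
    	}
    }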
	I0919 17:00:04.254295   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:d8:df:6b in network default
	I0919 17:00:04.254854   25636 main.go:141] libmachine: (multinode-553715-m02) Ensuring networks are active...
	I0919 17:00:04.254875   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:04.255499   25636 main.go:141] libmachine: (multinode-553715-m02) Ensuring network default is active
	I0919 17:00:04.255786   25636 main.go:141] libmachine: (multinode-553715-m02) Ensuring network mk-multinode-553715 is active
	I0919 17:00:04.256172   25636 main.go:141] libmachine: (multinode-553715-m02) Getting domain xml...
	I0919 17:00:04.256833   25636 main.go:141] libmachine: (multinode-553715-m02) Creating domain...
	I0919 17:00:05.501990   25636 main.go:141] libmachine: (multinode-553715-m02) Waiting to get IP...
	I0919 17:00:05.502770   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:05.503136   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:05.503163   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:05.503092   25997 retry.go:31] will retry after 298.117515ms: waiting for machine to come up
	I0919 17:00:05.802688   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:05.803064   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:05.803089   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:05.803037   25997 retry.go:31] will retry after 319.408507ms: waiting for machine to come up
	I0919 17:00:06.124450   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:06.124964   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:06.124988   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:06.124889   25997 retry.go:31] will retry after 377.392512ms: waiting for machine to come up
	I0919 17:00:06.503472   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:06.503865   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:06.503894   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:06.503821   25997 retry.go:31] will retry after 387.585935ms: waiting for machine to come up
	I0919 17:00:06.893257   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:06.893624   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:06.893645   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:06.893579   25997 retry.go:31] will retry after 635.587714ms: waiting for machine to come up
	I0919 17:00:07.530433   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:07.530860   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:07.530890   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:07.530805   25997 retry.go:31] will retry after 638.165685ms: waiting for machine to come up
	I0919 17:00:08.170510   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:08.170850   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:08.170879   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:08.170797   25997 retry.go:31] will retry after 777.120092ms: waiting for machine to come up
	I0919 17:00:08.948976   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:08.949455   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:08.949496   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:08.949425   25997 retry.go:31] will retry after 1.034868245s: waiting for machine to come up
	I0919 17:00:09.985849   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:09.986291   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:09.986322   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:09.986237   25997 retry.go:31] will retry after 1.856597336s: waiting for machine to come up
	I0919 17:00:11.845241   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:11.845649   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:11.845673   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:11.845598   25997 retry.go:31] will retry after 2.249455497s: waiting for machine to come up
	I0919 17:00:14.096395   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:14.096826   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:14.096858   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:14.096777   25997 retry.go:31] will retry after 2.08560295s: waiting for machine to come up
	I0919 17:00:16.184860   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:16.185272   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:16.185304   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:16.185221   25997 retry.go:31] will retry after 3.307890988s: waiting for machine to come up
	I0919 17:00:19.494088   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:19.494547   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:19.494577   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:19.494490   25997 retry.go:31] will retry after 3.757338063s: waiting for machine to come up
	I0919 17:00:23.256265   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:23.256689   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find current IP address of domain multinode-553715-m02 in network mk-multinode-553715
	I0919 17:00:23.256711   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | I0919 17:00:23.256644   25997 retry.go:31] will retry after 4.580096572s: waiting for machine to come up
	I0919 17:00:27.841690   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:27.842179   25636 main.go:141] libmachine: (multinode-553715-m02) Found IP for machine: 192.168.39.11
	I0919 17:00:27.842218   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has current primary IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
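Note: the repeated "will retry after ..." lines above come from polling the DHCP leases with a growing, jittered delay until the guest reports an address. A minimal sketch of that poll-with-backoff pattern in Go is shown below; it is not minikube's retry package, and the initial interval, growth factor, and deadline are assumed values.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry calls fn until it succeeds or maxWait elapses, sleeping a jittered,
    // growing interval between attempts, similar to the "will retry after" lines.
    func retry(maxWait time.Duration, fn func() error) error {
    	deadline := time.Now().Add(maxWait)
    	backoff := 300 * time.Millisecond
    	for {
    		if err := fn(); err == nil {
    			return nil
    		} else if time.Now().After(deadline) {
    			return err
    		}
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
    		fmt.Printf("will retry after %v\n", sleep)
    		time.Sleep(sleep)
    		backoff = backoff * 3 / 2
    	}
    }

    func main() {
    	attempts := 0
    	err := retry(30*time.Second, func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("unable to find current IP address")
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }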
	I0919 17:00:27.842230   25636 main.go:141] libmachine: (multinode-553715-m02) Reserving static IP address...
	I0919 17:00:27.842583   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | unable to find host DHCP lease matching {name: "multinode-553715-m02", mac: "52:54:00:b9:f9:a1", ip: "192.168.39.11"} in network mk-multinode-553715
	I0919 17:00:27.913003   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Getting to WaitForSSH function...
	I0919 17:00:27.913034   25636 main.go:141] libmachine: (multinode-553715-m02) Reserved static IP address: 192.168.39.11
	I0919 17:00:27.913048   25636 main.go:141] libmachine: (multinode-553715-m02) Waiting for SSH to be available...
	I0919 17:00:27.915086   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:27.915474   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:27.915514   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:27.915582   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Using SSH client type: external
	I0919 17:00:27.915611   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa (-rw-------)
	I0919 17:00:27.915638   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:00:27.915650   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | About to run SSH command:
	I0919 17:00:27.915659   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | exit 0
	I0919 17:00:28.004518   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | SSH cmd err, output: <nil>: 
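Note: WaitForSSH above probes the guest by running `exit 0` over an external ssh client with host-key checking disabled. The sketch below reassembles that probe with os/exec; the flags, key path, and IP are taken from the log lines above, but this is an illustration rather than libmachine's own implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // probeSSH runs `exit 0` over ssh with options in the style shown in the log,
    // returning nil once the guest's sshd accepts the connection.
    func probeSSH(ip, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + ip,
    		"exit 0",
    	}
    	out, err := exec.Command("ssh", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	key := "/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa"
    	if err := probeSSH("192.168.39.11", key); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("SSH is available")
    }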
	I0919 17:00:28.004732   25636 main.go:141] libmachine: (multinode-553715-m02) KVM machine creation complete!
	I0919 17:00:28.005029   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetConfigRaw
	I0919 17:00:28.005574   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:00:28.005759   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:00:28.005885   25636 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 17:00:28.005908   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetState
	I0919 17:00:28.007110   25636 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 17:00:28.007128   25636 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 17:00:28.007138   25636 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 17:00:28.007148   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:28.009521   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.009901   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:28.009934   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.010089   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:28.010303   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.010482   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.010687   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:28.010862   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 17:00:28.011241   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:00:28.011255   25636 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 17:00:28.123555   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:00:28.123576   25636 main.go:141] libmachine: Detecting the provisioner...
	I0919 17:00:28.123584   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:28.126492   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.126782   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:28.126812   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.127071   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:28.127263   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.127437   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.127559   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:28.127704   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 17:00:28.128118   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:00:28.128136   25636 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 17:00:28.241144   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0919 17:00:28.241213   25636 main.go:141] libmachine: found compatible host: buildroot
	I0919 17:00:28.241228   25636 main.go:141] libmachine: Provisioning with buildroot...
	I0919 17:00:28.241244   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetMachineName
	I0919 17:00:28.241477   25636 buildroot.go:166] provisioning hostname "multinode-553715-m02"
	I0919 17:00:28.241500   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetMachineName
	I0919 17:00:28.241729   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:28.244467   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.244865   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:28.244900   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.245059   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:28.245241   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.245386   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.245530   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:28.245728   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 17:00:28.246112   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:00:28.246127   25636 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553715-m02 && echo "multinode-553715-m02" | sudo tee /etc/hostname
	I0919 17:00:28.370057   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553715-m02
	
	I0919 17:00:28.370125   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:28.372821   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.373176   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:28.373207   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.373322   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:28.373498   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.373668   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.373788   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:28.373971   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 17:00:28.374346   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:00:28.374367   25636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553715-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553715-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553715-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:00:28.492042   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:00:28.492068   25636 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:00:28.492086   25636 buildroot.go:174] setting up certificates
	I0919 17:00:28.492103   25636 provision.go:83] configureAuth start
	I0919 17:00:28.492115   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetMachineName
	I0919 17:00:28.492356   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetIP
	I0919 17:00:28.495104   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.495478   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:28.495511   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.495664   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:28.497713   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.498002   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:28.498034   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.498112   25636 provision.go:138] copyHostCerts
	I0919 17:00:28.498151   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:00:28.498189   25636 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:00:28.498202   25636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:00:28.498283   25636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:00:28.498369   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:00:28.498394   25636 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:00:28.498402   25636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:00:28.498438   25636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:00:28.498497   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:00:28.498520   25636 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:00:28.498529   25636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:00:28.498560   25636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:00:28.498659   25636 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.multinode-553715-m02 san=[192.168.39.11 192.168.39.11 localhost 127.0.0.1 minikube multinode-553715-m02]
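Note: provision.go:112 above generates a server certificate whose SANs cover the node IP, loopback, and the hostnames. The sketch below shows the same idea with only the Go standard library; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem, and the key usage flags are assumed.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-553715-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log: node IP, loopback, and the hostnames.
    		IPAddresses: []net.IP{net.ParseIP("192.168.39.11"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "multinode-553715-m02"},
    	}
    	// Self-signed here for brevity; minikube signs with its CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }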
	I0919 17:00:28.753740   25636 provision.go:172] copyRemoteCerts
	I0919 17:00:28.753795   25636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:00:28.753818   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:28.756257   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.756621   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:28.756650   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.756813   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:28.757003   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.757139   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:28.757265   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa Username:docker}
	I0919 17:00:28.842094   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 17:00:28.842169   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:00:28.864358   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 17:00:28.864434   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0919 17:00:28.886050   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 17:00:28.886133   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:00:28.908195   25636 provision.go:86] duration metric: configureAuth took 416.079324ms
	I0919 17:00:28.908218   25636 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:00:28.908425   25636 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:00:28.908502   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:28.911006   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.911331   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:28.911365   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:28.911508   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:28.911703   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.911841   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:28.911952   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:28.912091   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 17:00:28.912575   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:00:28.912602   25636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:00:29.208955   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:00:29.208982   25636 main.go:141] libmachine: Checking connection to Docker...
	I0919 17:00:29.209015   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetURL
	I0919 17:00:29.210214   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | Using libvirt version 6000000
	I0919 17:00:29.212485   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.212883   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:29.212916   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.213127   25636 main.go:141] libmachine: Docker is up and running!
	I0919 17:00:29.213159   25636 main.go:141] libmachine: Reticulating splines...
	I0919 17:00:29.213166   25636 client.go:171] LocalClient.Create took 25.383587881s
	I0919 17:00:29.213188   25636 start.go:167] duration metric: libmachine.API.Create for "multinode-553715" took 25.383648288s
	I0919 17:00:29.213198   25636 start.go:300] post-start starting for "multinode-553715-m02" (driver="kvm2")
	I0919 17:00:29.213207   25636 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:00:29.213225   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:00:29.213464   25636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:00:29.213488   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:29.215859   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.216195   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:29.216223   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.216359   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:29.216565   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:29.216724   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:29.216849   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa Username:docker}
	I0919 17:00:29.301596   25636 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:00:29.305396   25636 command_runner.go:130] > NAME=Buildroot
	I0919 17:00:29.305413   25636 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I0919 17:00:29.305418   25636 command_runner.go:130] > ID=buildroot
	I0919 17:00:29.305423   25636 command_runner.go:130] > VERSION_ID=2021.02.12
	I0919 17:00:29.305428   25636 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0919 17:00:29.305508   25636 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:00:29.305530   25636 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:00:29.305600   25636 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:00:29.305701   25636 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:00:29.305715   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /etc/ssl/certs/132392.pem
	I0919 17:00:29.305814   25636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:00:29.314392   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:00:29.336729   25636 start.go:303] post-start completed in 123.517166ms
	I0919 17:00:29.336776   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetConfigRaw
	I0919 17:00:29.337327   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetIP
	I0919 17:00:29.339790   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.340115   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:29.340151   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.340316   25636 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 17:00:29.340530   25636 start.go:128] duration metric: createHost completed in 25.528843038s
	I0919 17:00:29.340557   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:29.342715   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.343029   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:29.343066   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.343159   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:29.343304   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:29.343428   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:29.343562   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:29.343715   25636 main.go:141] libmachine: Using SSH client type: native
	I0919 17:00:29.344005   25636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:00:29.344017   25636 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:00:29.460969   25636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142829.433073451
	
	I0919 17:00:29.460994   25636 fix.go:206] guest clock: 1695142829.433073451
	I0919 17:00:29.461003   25636 fix.go:219] Guest: 2023-09-19 17:00:29.433073451 +0000 UTC Remote: 2023-09-19 17:00:29.340541957 +0000 UTC m=+91.721937534 (delta=92.531494ms)
	I0919 17:00:29.461022   25636 fix.go:190] guest clock delta is within tolerance: 92.531494ms
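Note: fix.go above compares the guest clock against the host-side timestamp and only resyncs when the delta exceeds a tolerance. A small sketch of that check using the values reported in the log; the 2s threshold here is an assumed illustrative value, since the log only states the delta is "within tolerance".

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values taken from the log lines above.
    	guest := time.Unix(1695142829, 433073451).UTC()
    	remote := time.Date(2023, 9, 19, 17, 0, 29, 340541957, time.UTC)

    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold for this sketch
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }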
	I0919 17:00:29.461028   25636 start.go:83] releasing machines lock for "multinode-553715-m02", held for 25.649418942s
	I0919 17:00:29.461051   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:00:29.461322   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetIP
	I0919 17:00:29.463878   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.464174   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:29.464218   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.466410   25636 out.go:177] * Found network options:
	I0919 17:00:29.467859   25636 out.go:177]   - NO_PROXY=192.168.39.38
	W0919 17:00:29.469252   25636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 17:00:29.469281   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:00:29.469781   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:00:29.469975   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:00:29.470083   25636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:00:29.470121   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	W0919 17:00:29.470144   25636 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 17:00:29.470206   25636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:00:29.470220   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:00:29.472527   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.472855   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.472891   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:29.472918   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.473096   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:29.473269   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:29.473316   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:29.473358   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:29.473382   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:29.473476   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:00:29.473538   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa Username:docker}
	I0919 17:00:29.473583   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:00:29.473664   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:00:29.473796   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa Username:docker}
	I0919 17:00:29.583090   25636 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0919 17:00:29.710843   25636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 17:00:29.716731   25636 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0919 17:00:29.717007   25636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:00:29.717066   25636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:00:29.732816   25636 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0919 17:00:29.732884   25636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:00:29.732896   25636 start.go:469] detecting cgroup driver to use...
	I0919 17:00:29.732961   25636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:00:29.745990   25636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:00:29.757882   25636 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:00:29.757934   25636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:00:29.770065   25636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:00:29.782079   25636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:00:29.881374   25636 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0919 17:00:29.881456   25636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:00:29.992509   25636 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0919 17:00:29.992547   25636 docker.go:212] disabling docker service ...
	I0919 17:00:29.992601   25636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:00:30.006048   25636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:00:30.017679   25636 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0919 17:00:30.018061   25636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:00:30.121772   25636 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0919 17:00:30.121847   25636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:00:30.133824   25636 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0919 17:00:30.134101   25636 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0919 17:00:30.230001   25636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
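Because this profile runs CRI-O, the Docker-based runtimes on the new node are stopped and masked first so they cannot claim the CRI socket. The systemctl calls above boil down to roughly the following (unit names exactly as in the log; the loop is only a compaction for readability):

    # Stop, disable and mask cri-dockerd and docker (sketch of the sequence above)
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" || true   # "Unit ... not loaded" is harmless here
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service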
	I0919 17:00:30.242893   25636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:00:30.259643   25636 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0919 17:00:30.259684   25636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0919 17:00:30.259738   25636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:00:30.268539   25636 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:00:30.268593   25636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:00:30.277228   25636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:00:30.285850   25636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
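The three sed edits above are how the drop-in /etc/crio/crio.conf.d/02-crio.conf gets the pause image and cgroup driver for this run (registry.k8s.io/pause:3.9 and cgroupfs, as the crio config dump further down confirms). A condensed sketch of the same edits, using the values from this log:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pause image minikube selected for Kubernetes v1.28.2
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    # Cgroup driver must match the kubelet (cgroupfs on this VM)
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    # Recreate conmon_cgroup = "pod" directly after cgroup_manager
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"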
	I0919 17:00:30.295024   25636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:00:30.304379   25636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:00:30.312734   25636 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:00:30.312771   25636 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:00:30.312819   25636 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 17:00:30.327572   25636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
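The sysctl failure above is expected on a fresh VM: the bridge-nf-call sysctls only exist once the br_netfilter module is loaded, which is exactly what the follow-up commands do. A minimal recap of that recovery path:

    # bridge netfilter must be loaded before kube-proxy/CNI iptables rules can apply
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # with the module loaded, the earlier probe would now succeed
    sudo sysctl net.bridge.bridge-nf-call-iptables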
	I0919 17:00:30.337012   25636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:00:30.453057   25636 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:00:30.627109   25636 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:00:30.627187   25636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:00:30.632138   25636 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0919 17:00:30.632159   25636 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0919 17:00:30.632171   25636 command_runner.go:130] > Device: 16h/22d	Inode: 708         Links: 1
	I0919 17:00:30.632182   25636 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 17:00:30.632190   25636 command_runner.go:130] > Access: 2023-09-19 17:00:30.586388315 +0000
	I0919 17:00:30.632199   25636 command_runner.go:130] > Modify: 2023-09-19 17:00:30.586388315 +0000
	I0919 17:00:30.632207   25636 command_runner.go:130] > Change: 2023-09-19 17:00:30.586388315 +0000
	I0919 17:00:30.632213   25636 command_runner.go:130] >  Birth: -
	I0919 17:00:30.632625   25636 start.go:537] Will wait 60s for crictl version
	I0919 17:00:30.632674   25636 ssh_runner.go:195] Run: which crictl
	I0919 17:00:30.636120   25636 command_runner.go:130] > /usr/bin/crictl
	I0919 17:00:30.636432   25636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:00:30.675958   25636 command_runner.go:130] > Version:  0.1.0
	I0919 17:00:30.675985   25636 command_runner.go:130] > RuntimeName:  cri-o
	I0919 17:00:30.675992   25636 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0919 17:00:30.676000   25636 command_runner.go:130] > RuntimeApiVersion:  v1
	I0919 17:00:30.677468   25636 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
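crictl reads its endpoint from the /etc/crictl.yaml written a few lines earlier, so no extra flags are needed to talk to this node's CRI-O. A quick manual check against the same socket would look like the sketch below (crictl info is an additional command not run in this log):

    # /etc/crictl.yaml (written above) already contains:
    #   runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version   # expect RuntimeName cri-o, RuntimeVersion 1.24.1
    sudo crictl info      # richer runtime/CNI status, not part of this run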
	I0919 17:00:30.677533   25636 ssh_runner.go:195] Run: crio --version
	I0919 17:00:30.722604   25636 command_runner.go:130] > crio version 1.24.1
	I0919 17:00:30.722626   25636 command_runner.go:130] > Version:          1.24.1
	I0919 17:00:30.722633   25636 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 17:00:30.722637   25636 command_runner.go:130] > GitTreeState:     dirty
	I0919 17:00:30.722649   25636 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 17:00:30.722653   25636 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 17:00:30.722657   25636 command_runner.go:130] > Compiler:         gc
	I0919 17:00:30.722662   25636 command_runner.go:130] > Platform:         linux/amd64
	I0919 17:00:30.722667   25636 command_runner.go:130] > Linkmode:         dynamic
	I0919 17:00:30.722674   25636 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 17:00:30.722678   25636 command_runner.go:130] > SeccompEnabled:   true
	I0919 17:00:30.722682   25636 command_runner.go:130] > AppArmorEnabled:  false
	I0919 17:00:30.724109   25636 ssh_runner.go:195] Run: crio --version
	I0919 17:00:30.771293   25636 command_runner.go:130] > crio version 1.24.1
	I0919 17:00:30.771317   25636 command_runner.go:130] > Version:          1.24.1
	I0919 17:00:30.771324   25636 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 17:00:30.771329   25636 command_runner.go:130] > GitTreeState:     dirty
	I0919 17:00:30.771335   25636 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 17:00:30.771339   25636 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 17:00:30.771344   25636 command_runner.go:130] > Compiler:         gc
	I0919 17:00:30.771349   25636 command_runner.go:130] > Platform:         linux/amd64
	I0919 17:00:30.771353   25636 command_runner.go:130] > Linkmode:         dynamic
	I0919 17:00:30.771360   25636 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 17:00:30.771368   25636 command_runner.go:130] > SeccompEnabled:   true
	I0919 17:00:30.771375   25636 command_runner.go:130] > AppArmorEnabled:  false
	I0919 17:00:30.775401   25636 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I0919 17:00:30.776957   25636 out.go:177]   - env NO_PROXY=192.168.39.38
	I0919 17:00:30.778312   25636 main.go:141] libmachine: (multinode-553715-m02) Calling .GetIP
	I0919 17:00:30.780766   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:30.781152   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:00:30.781187   25636 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:00:30.781294   25636 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 17:00:30.785519   25636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:00:30.798092   25636 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715 for IP: 192.168.39.11
	I0919 17:00:30.798125   25636 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:00:30.798256   25636 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:00:30.798308   25636 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:00:30.798325   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 17:00:30.798344   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 17:00:30.798359   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 17:00:30.798375   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 17:00:30.798442   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:00:30.798482   25636 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:00:30.798498   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:00:30.798534   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:00:30.798567   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:00:30.798602   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:00:30.798652   25636 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:00:30.798688   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /usr/share/ca-certificates/132392.pem
	I0919 17:00:30.798708   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:00:30.798723   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem -> /usr/share/ca-certificates/13239.pem
	I0919 17:00:30.799324   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:00:30.822265   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:00:30.844768   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:00:30.866737   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:00:30.888439   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:00:30.910236   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:00:30.932128   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:00:30.954101   25636 ssh_runner.go:195] Run: openssl version
	I0919 17:00:30.959128   25636 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0919 17:00:30.959381   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:00:30.969055   25636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:00:30.973440   25636 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:00:30.973690   25636 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:00:30.973729   25636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:00:30.979107   25636 command_runner.go:130] > 51391683
	I0919 17:00:30.979168   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
	I0919 17:00:30.988330   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:00:30.997634   25636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:00:31.001919   25636 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:00:31.002036   25636 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:00:31.002079   25636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:00:31.007419   25636 command_runner.go:130] > 3ec20f2e
	I0919 17:00:31.007463   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:00:31.016632   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:00:31.026161   25636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:00:31.030470   25636 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:00:31.030691   25636 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:00:31.030732   25636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:00:31.035871   25636 command_runner.go:130] > b5213941
	I0919 17:00:31.036114   25636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
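Each CA bundle is installed twice: copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (the 51391683 / 3ec20f2e / b5213941 values above), which is how OpenSSL's hashed-directory lookup finds it. A sketch of one iteration, using the minikubeCA values from this run:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # b5213941 in this log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # c_rehash-style symlink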
	I0919 17:00:31.045380   25636 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:00:31.049406   25636 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 17:00:31.049435   25636 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 17:00:31.049512   25636 ssh_runner.go:195] Run: crio config
	I0919 17:00:31.103896   25636 command_runner.go:130] ! time="2023-09-19 17:00:31.078642597Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0919 17:00:31.103927   25636 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0919 17:00:31.115908   25636 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0919 17:00:31.115939   25636 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0919 17:00:31.115951   25636 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0919 17:00:31.115956   25636 command_runner.go:130] > #
	I0919 17:00:31.115970   25636 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0919 17:00:31.115980   25636 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0919 17:00:31.115999   25636 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0919 17:00:31.116008   25636 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0919 17:00:31.116013   25636 command_runner.go:130] > # reload'.
	I0919 17:00:31.116023   25636 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0919 17:00:31.116036   25636 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0919 17:00:31.116055   25636 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0919 17:00:31.116067   25636 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0919 17:00:31.116075   25636 command_runner.go:130] > [crio]
	I0919 17:00:31.116084   25636 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0919 17:00:31.116091   25636 command_runner.go:130] > # containers images, in this directory.
	I0919 17:00:31.116099   25636 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0919 17:00:31.116117   25636 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0919 17:00:31.116131   25636 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0919 17:00:31.116140   25636 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0919 17:00:31.116148   25636 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0919 17:00:31.116155   25636 command_runner.go:130] > storage_driver = "overlay"
	I0919 17:00:31.116165   25636 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0919 17:00:31.116174   25636 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0919 17:00:31.116183   25636 command_runner.go:130] > storage_option = [
	I0919 17:00:31.116192   25636 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0919 17:00:31.116199   25636 command_runner.go:130] > ]
	I0919 17:00:31.116208   25636 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0919 17:00:31.116220   25636 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0919 17:00:31.116226   25636 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0919 17:00:31.116236   25636 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0919 17:00:31.116244   25636 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0919 17:00:31.116254   25636 command_runner.go:130] > # always happen on a node reboot
	I0919 17:00:31.116262   25636 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0919 17:00:31.116271   25636 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0919 17:00:31.116283   25636 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0919 17:00:31.116297   25636 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0919 17:00:31.116308   25636 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0919 17:00:31.116320   25636 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0919 17:00:31.116335   25636 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0919 17:00:31.116346   25636 command_runner.go:130] > # internal_wipe = true
	I0919 17:00:31.116355   25636 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0919 17:00:31.116363   25636 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0919 17:00:31.116375   25636 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0919 17:00:31.116390   25636 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0919 17:00:31.116403   25636 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0919 17:00:31.116426   25636 command_runner.go:130] > [crio.api]
	I0919 17:00:31.116439   25636 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0919 17:00:31.116447   25636 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0919 17:00:31.116456   25636 command_runner.go:130] > # IP address on which the stream server will listen.
	I0919 17:00:31.116468   25636 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0919 17:00:31.116481   25636 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0919 17:00:31.116492   25636 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0919 17:00:31.116501   25636 command_runner.go:130] > # stream_port = "0"
	I0919 17:00:31.116509   25636 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0919 17:00:31.116518   25636 command_runner.go:130] > # stream_enable_tls = false
	I0919 17:00:31.116527   25636 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0919 17:00:31.116536   25636 command_runner.go:130] > # stream_idle_timeout = ""
	I0919 17:00:31.116545   25636 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0919 17:00:31.116558   25636 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0919 17:00:31.116567   25636 command_runner.go:130] > # minutes.
	I0919 17:00:31.116573   25636 command_runner.go:130] > # stream_tls_cert = ""
	I0919 17:00:31.116585   25636 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0919 17:00:31.116597   25636 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0919 17:00:31.116606   25636 command_runner.go:130] > # stream_tls_key = ""
	I0919 17:00:31.116615   25636 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0919 17:00:31.116627   25636 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0919 17:00:31.116637   25636 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0919 17:00:31.116646   25636 command_runner.go:130] > # stream_tls_ca = ""
	I0919 17:00:31.116657   25636 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 17:00:31.116667   25636 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0919 17:00:31.116680   25636 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 17:00:31.116691   25636 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0919 17:00:31.116720   25636 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0919 17:00:31.116733   25636 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0919 17:00:31.116743   25636 command_runner.go:130] > [crio.runtime]
	I0919 17:00:31.116754   25636 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0919 17:00:31.116766   25636 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0919 17:00:31.116775   25636 command_runner.go:130] > # "nofile=1024:2048"
	I0919 17:00:31.116787   25636 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0919 17:00:31.116795   25636 command_runner.go:130] > # default_ulimits = [
	I0919 17:00:31.116801   25636 command_runner.go:130] > # ]
	I0919 17:00:31.116813   25636 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0919 17:00:31.116822   25636 command_runner.go:130] > # no_pivot = false
	I0919 17:00:31.116834   25636 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0919 17:00:31.116846   25636 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0919 17:00:31.116859   25636 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0919 17:00:31.116871   25636 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0919 17:00:31.116882   25636 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0919 17:00:31.116896   25636 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 17:00:31.116907   25636 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0919 17:00:31.116917   25636 command_runner.go:130] > # Cgroup setting for conmon
	I0919 17:00:31.116931   25636 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0919 17:00:31.116941   25636 command_runner.go:130] > conmon_cgroup = "pod"
	I0919 17:00:31.116952   25636 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0919 17:00:31.116962   25636 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0919 17:00:31.116975   25636 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 17:00:31.116984   25636 command_runner.go:130] > conmon_env = [
	I0919 17:00:31.116996   25636 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0919 17:00:31.117004   25636 command_runner.go:130] > ]
	I0919 17:00:31.117037   25636 command_runner.go:130] > # Additional environment variables to set for all the
	I0919 17:00:31.117061   25636 command_runner.go:130] > # containers. These are overridden if set in the
	I0919 17:00:31.117074   25636 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0919 17:00:31.117084   25636 command_runner.go:130] > # default_env = [
	I0919 17:00:31.117090   25636 command_runner.go:130] > # ]
	I0919 17:00:31.117102   25636 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0919 17:00:31.117110   25636 command_runner.go:130] > # selinux = false
	I0919 17:00:31.117120   25636 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0919 17:00:31.117131   25636 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0919 17:00:31.117142   25636 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0919 17:00:31.117148   25636 command_runner.go:130] > # seccomp_profile = ""
	I0919 17:00:31.117160   25636 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0919 17:00:31.117171   25636 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0919 17:00:31.117184   25636 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0919 17:00:31.117195   25636 command_runner.go:130] > # which might increase security.
	I0919 17:00:31.117206   25636 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0919 17:00:31.117219   25636 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0919 17:00:31.117232   25636 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0919 17:00:31.117245   25636 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0919 17:00:31.117258   25636 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0919 17:00:31.117269   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:00:31.117279   25636 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0919 17:00:31.117292   25636 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0919 17:00:31.117302   25636 command_runner.go:130] > # the cgroup blockio controller.
	I0919 17:00:31.117312   25636 command_runner.go:130] > # blockio_config_file = ""
	I0919 17:00:31.117325   25636 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0919 17:00:31.117336   25636 command_runner.go:130] > # irqbalance daemon.
	I0919 17:00:31.117344   25636 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0919 17:00:31.117357   25636 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0919 17:00:31.117368   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:00:31.117379   25636 command_runner.go:130] > # rdt_config_file = ""
	I0919 17:00:31.117391   25636 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0919 17:00:31.117402   25636 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0919 17:00:31.117414   25636 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0919 17:00:31.117425   25636 command_runner.go:130] > # separate_pull_cgroup = ""
	I0919 17:00:31.117435   25636 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0919 17:00:31.117448   25636 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0919 17:00:31.117458   25636 command_runner.go:130] > # will be added.
	I0919 17:00:31.117468   25636 command_runner.go:130] > # default_capabilities = [
	I0919 17:00:31.117477   25636 command_runner.go:130] > # 	"CHOWN",
	I0919 17:00:31.117489   25636 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0919 17:00:31.117497   25636 command_runner.go:130] > # 	"FSETID",
	I0919 17:00:31.117504   25636 command_runner.go:130] > # 	"FOWNER",
	I0919 17:00:31.117511   25636 command_runner.go:130] > # 	"SETGID",
	I0919 17:00:31.117517   25636 command_runner.go:130] > # 	"SETUID",
	I0919 17:00:31.117526   25636 command_runner.go:130] > # 	"SETPCAP",
	I0919 17:00:31.117536   25636 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0919 17:00:31.117542   25636 command_runner.go:130] > # 	"KILL",
	I0919 17:00:31.117546   25636 command_runner.go:130] > # ]
	I0919 17:00:31.117554   25636 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0919 17:00:31.117563   25636 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 17:00:31.117570   25636 command_runner.go:130] > # default_sysctls = [
	I0919 17:00:31.117573   25636 command_runner.go:130] > # ]
	I0919 17:00:31.117579   25636 command_runner.go:130] > # List of devices on the host that a
	I0919 17:00:31.117586   25636 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0919 17:00:31.117592   25636 command_runner.go:130] > # allowed_devices = [
	I0919 17:00:31.117596   25636 command_runner.go:130] > # 	"/dev/fuse",
	I0919 17:00:31.117600   25636 command_runner.go:130] > # ]
	I0919 17:00:31.117610   25636 command_runner.go:130] > # List of additional devices. specified as
	I0919 17:00:31.117619   25636 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0919 17:00:31.117627   25636 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0919 17:00:31.117650   25636 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 17:00:31.117657   25636 command_runner.go:130] > # additional_devices = [
	I0919 17:00:31.117660   25636 command_runner.go:130] > # ]
	I0919 17:00:31.117667   25636 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0919 17:00:31.117673   25636 command_runner.go:130] > # cdi_spec_dirs = [
	I0919 17:00:31.117677   25636 command_runner.go:130] > # 	"/etc/cdi",
	I0919 17:00:31.117681   25636 command_runner.go:130] > # 	"/var/run/cdi",
	I0919 17:00:31.117687   25636 command_runner.go:130] > # ]
	I0919 17:00:31.117694   25636 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0919 17:00:31.117702   25636 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0919 17:00:31.117708   25636 command_runner.go:130] > # Defaults to false.
	I0919 17:00:31.117713   25636 command_runner.go:130] > # device_ownership_from_security_context = false
	I0919 17:00:31.117722   25636 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0919 17:00:31.117730   25636 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0919 17:00:31.117736   25636 command_runner.go:130] > # hooks_dir = [
	I0919 17:00:31.117741   25636 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0919 17:00:31.117747   25636 command_runner.go:130] > # ]
	I0919 17:00:31.117754   25636 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0919 17:00:31.117762   25636 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0919 17:00:31.117769   25636 command_runner.go:130] > # its default mounts from the following two files:
	I0919 17:00:31.117775   25636 command_runner.go:130] > #
	I0919 17:00:31.117782   25636 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0919 17:00:31.117791   25636 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0919 17:00:31.117798   25636 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0919 17:00:31.117804   25636 command_runner.go:130] > #
	I0919 17:00:31.117810   25636 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0919 17:00:31.117819   25636 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0919 17:00:31.117827   25636 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0919 17:00:31.117835   25636 command_runner.go:130] > #      only add mounts it finds in this file.
	I0919 17:00:31.117841   25636 command_runner.go:130] > #
	I0919 17:00:31.117845   25636 command_runner.go:130] > # default_mounts_file = ""
	I0919 17:00:31.117853   25636 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0919 17:00:31.117860   25636 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0919 17:00:31.117867   25636 command_runner.go:130] > pids_limit = 1024
	I0919 17:00:31.117873   25636 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0919 17:00:31.117881   25636 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0919 17:00:31.117889   25636 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0919 17:00:31.117900   25636 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0919 17:00:31.117906   25636 command_runner.go:130] > # log_size_max = -1
	I0919 17:00:31.117913   25636 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0919 17:00:31.117919   25636 command_runner.go:130] > # log_to_journald = false
	I0919 17:00:31.117925   25636 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0919 17:00:31.117932   25636 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0919 17:00:31.117941   25636 command_runner.go:130] > # Path to directory for container attach sockets.
	I0919 17:00:31.117946   25636 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0919 17:00:31.117953   25636 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0919 17:00:31.117958   25636 command_runner.go:130] > # bind_mount_prefix = ""
	I0919 17:00:31.117966   25636 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0919 17:00:31.117970   25636 command_runner.go:130] > # read_only = false
	I0919 17:00:31.117976   25636 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0919 17:00:31.117984   25636 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0919 17:00:31.117990   25636 command_runner.go:130] > # live configuration reload.
	I0919 17:00:31.117995   25636 command_runner.go:130] > # log_level = "info"
	I0919 17:00:31.118002   25636 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0919 17:00:31.118007   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:00:31.118013   25636 command_runner.go:130] > # log_filter = ""
	I0919 17:00:31.118019   25636 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0919 17:00:31.118027   25636 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0919 17:00:31.118031   25636 command_runner.go:130] > # separated by comma.
	I0919 17:00:31.118035   25636 command_runner.go:130] > # uid_mappings = ""
	I0919 17:00:31.118049   25636 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0919 17:00:31.118057   25636 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0919 17:00:31.118062   25636 command_runner.go:130] > # separated by comma.
	I0919 17:00:31.118068   25636 command_runner.go:130] > # gid_mappings = ""
	I0919 17:00:31.118074   25636 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0919 17:00:31.118082   25636 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 17:00:31.118088   25636 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 17:00:31.118094   25636 command_runner.go:130] > # minimum_mappable_uid = -1
	I0919 17:00:31.118100   25636 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0919 17:00:31.118112   25636 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 17:00:31.118120   25636 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 17:00:31.118124   25636 command_runner.go:130] > # minimum_mappable_gid = -1
	I0919 17:00:31.118131   25636 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0919 17:00:31.118137   25636 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0919 17:00:31.118145   25636 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0919 17:00:31.118151   25636 command_runner.go:130] > # ctr_stop_timeout = 30
	I0919 17:00:31.118159   25636 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0919 17:00:31.118167   25636 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0919 17:00:31.118173   25636 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0919 17:00:31.118181   25636 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0919 17:00:31.118186   25636 command_runner.go:130] > drop_infra_ctr = false
	I0919 17:00:31.118195   25636 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0919 17:00:31.118201   25636 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0919 17:00:31.118210   25636 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0919 17:00:31.118216   25636 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0919 17:00:31.118222   25636 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0919 17:00:31.118229   25636 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0919 17:00:31.118236   25636 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0919 17:00:31.118243   25636 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0919 17:00:31.118250   25636 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0919 17:00:31.118256   25636 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0919 17:00:31.118264   25636 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0919 17:00:31.118272   25636 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0919 17:00:31.118278   25636 command_runner.go:130] > # default_runtime = "runc"
	I0919 17:00:31.118284   25636 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0919 17:00:31.118293   25636 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0919 17:00:31.118304   25636 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0919 17:00:31.118311   25636 command_runner.go:130] > # creation as a file is not desired either.
	I0919 17:00:31.118320   25636 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0919 17:00:31.118327   25636 command_runner.go:130] > # the hostname is being managed dynamically.
	I0919 17:00:31.118334   25636 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0919 17:00:31.118337   25636 command_runner.go:130] > # ]
	I0919 17:00:31.118346   25636 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0919 17:00:31.118355   25636 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0919 17:00:31.118364   25636 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0919 17:00:31.118371   25636 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0919 17:00:31.118377   25636 command_runner.go:130] > #
	I0919 17:00:31.118382   25636 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0919 17:00:31.118389   25636 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0919 17:00:31.118396   25636 command_runner.go:130] > #  runtime_type = "oci"
	I0919 17:00:31.118400   25636 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0919 17:00:31.118407   25636 command_runner.go:130] > #  privileged_without_host_devices = false
	I0919 17:00:31.118412   25636 command_runner.go:130] > #  allowed_annotations = []
	I0919 17:00:31.118417   25636 command_runner.go:130] > # Where:
	I0919 17:00:31.118423   25636 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0919 17:00:31.118431   25636 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0919 17:00:31.118439   25636 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0919 17:00:31.118447   25636 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0919 17:00:31.118454   25636 command_runner.go:130] > #   in $PATH.
	I0919 17:00:31.118460   25636 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0919 17:00:31.118467   25636 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0919 17:00:31.118474   25636 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0919 17:00:31.118483   25636 command_runner.go:130] > #   state.
	I0919 17:00:31.118496   25636 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0919 17:00:31.118508   25636 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0919 17:00:31.118521   25636 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0919 17:00:31.118534   25636 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0919 17:00:31.118547   25636 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0919 17:00:31.118561   25636 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0919 17:00:31.118572   25636 command_runner.go:130] > #   The currently recognized values are:
	I0919 17:00:31.118585   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0919 17:00:31.118599   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0919 17:00:31.118612   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0919 17:00:31.118625   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0919 17:00:31.118640   25636 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0919 17:00:31.118654   25636 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0919 17:00:31.118667   25636 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0919 17:00:31.118680   25636 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0919 17:00:31.118688   25636 command_runner.go:130] > #   should be moved to the container's cgroup
	I0919 17:00:31.118693   25636 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0919 17:00:31.118699   25636 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0919 17:00:31.118704   25636 command_runner.go:130] > runtime_type = "oci"
	I0919 17:00:31.118711   25636 command_runner.go:130] > runtime_root = "/run/runc"
	I0919 17:00:31.118715   25636 command_runner.go:130] > runtime_config_path = ""
	I0919 17:00:31.118722   25636 command_runner.go:130] > monitor_path = ""
	I0919 17:00:31.118726   25636 command_runner.go:130] > monitor_cgroup = ""
	I0919 17:00:31.118733   25636 command_runner.go:130] > monitor_exec_cgroup = ""
	I0919 17:00:31.118740   25636 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0919 17:00:31.118746   25636 command_runner.go:130] > # running containers
	I0919 17:00:31.118751   25636 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0919 17:00:31.118759   25636 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0919 17:00:31.118787   25636 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0919 17:00:31.118795   25636 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0919 17:00:31.118802   25636 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0919 17:00:31.118809   25636 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0919 17:00:31.118814   25636 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0919 17:00:31.118820   25636 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0919 17:00:31.118828   25636 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0919 17:00:31.118833   25636 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0919 17:00:31.118842   25636 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0919 17:00:31.118849   25636 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0919 17:00:31.118855   25636 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0919 17:00:31.118865   25636 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0919 17:00:31.118874   25636 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0919 17:00:31.118882   25636 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0919 17:00:31.118893   25636 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0919 17:00:31.118904   25636 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0919 17:00:31.118910   25636 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0919 17:00:31.118919   25636 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0919 17:00:31.118925   25636 command_runner.go:130] > # Example:
	I0919 17:00:31.118930   25636 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0919 17:00:31.118938   25636 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0919 17:00:31.118945   25636 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0919 17:00:31.118950   25636 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0919 17:00:31.118957   25636 command_runner.go:130] > # cpuset = 0
	I0919 17:00:31.118961   25636 command_runner.go:130] > # cpushares = "0-1"
	I0919 17:00:31.118967   25636 command_runner.go:130] > # Where:
	I0919 17:00:31.118972   25636 command_runner.go:130] > # The workload name is workload-type.
	I0919 17:00:31.118981   25636 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0919 17:00:31.118989   25636 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0919 17:00:31.118995   25636 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0919 17:00:31.119005   25636 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0919 17:00:31.119011   25636 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0919 17:00:31.119017   25636 command_runner.go:130] > # 
	I0919 17:00:31.119023   25636 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0919 17:00:31.119029   25636 command_runner.go:130] > #
	I0919 17:00:31.119035   25636 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0919 17:00:31.119043   25636 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0919 17:00:31.119054   25636 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0919 17:00:31.119061   25636 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0919 17:00:31.119069   25636 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0919 17:00:31.119074   25636 command_runner.go:130] > [crio.image]
	I0919 17:00:31.119082   25636 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0919 17:00:31.119087   25636 command_runner.go:130] > # default_transport = "docker://"
	I0919 17:00:31.119093   25636 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0919 17:00:31.119101   25636 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0919 17:00:31.119106   25636 command_runner.go:130] > # global_auth_file = ""
	I0919 17:00:31.119115   25636 command_runner.go:130] > # The image used to instantiate infra containers.
	I0919 17:00:31.119120   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:00:31.119127   25636 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0919 17:00:31.119135   25636 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0919 17:00:31.119143   25636 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0919 17:00:31.119148   25636 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:00:31.119155   25636 command_runner.go:130] > # pause_image_auth_file = ""
	I0919 17:00:31.119161   25636 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0919 17:00:31.119169   25636 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0919 17:00:31.119178   25636 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0919 17:00:31.119186   25636 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0919 17:00:31.119192   25636 command_runner.go:130] > # pause_command = "/pause"
	I0919 17:00:31.119198   25636 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0919 17:00:31.119206   25636 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0919 17:00:31.119215   25636 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0919 17:00:31.119223   25636 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0919 17:00:31.119231   25636 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0919 17:00:31.119238   25636 command_runner.go:130] > # signature_policy = ""
	I0919 17:00:31.119244   25636 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0919 17:00:31.119253   25636 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0919 17:00:31.119257   25636 command_runner.go:130] > # changing them here.
	I0919 17:00:31.119263   25636 command_runner.go:130] > # insecure_registries = [
	I0919 17:00:31.119267   25636 command_runner.go:130] > # ]
	I0919 17:00:31.119276   25636 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0919 17:00:31.119285   25636 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0919 17:00:31.119289   25636 command_runner.go:130] > # image_volumes = "mkdir"
	I0919 17:00:31.119295   25636 command_runner.go:130] > # Temporary directory to use for storing big files
	I0919 17:00:31.119301   25636 command_runner.go:130] > # big_files_temporary_dir = ""
	I0919 17:00:31.119307   25636 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0919 17:00:31.119313   25636 command_runner.go:130] > # CNI plugins.
	I0919 17:00:31.119317   25636 command_runner.go:130] > [crio.network]
	I0919 17:00:31.119325   25636 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0919 17:00:31.119333   25636 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0919 17:00:31.119338   25636 command_runner.go:130] > # cni_default_network = ""
	I0919 17:00:31.119344   25636 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0919 17:00:31.119351   25636 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0919 17:00:31.119356   25636 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0919 17:00:31.119362   25636 command_runner.go:130] > # plugin_dirs = [
	I0919 17:00:31.119367   25636 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0919 17:00:31.119372   25636 command_runner.go:130] > # ]
	I0919 17:00:31.119378   25636 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0919 17:00:31.119385   25636 command_runner.go:130] > [crio.metrics]
	I0919 17:00:31.119390   25636 command_runner.go:130] > # Globally enable or disable metrics support.
	I0919 17:00:31.119396   25636 command_runner.go:130] > enable_metrics = true
	I0919 17:00:31.119401   25636 command_runner.go:130] > # Specify enabled metrics collectors.
	I0919 17:00:31.119408   25636 command_runner.go:130] > # Per default all metrics are enabled.
	I0919 17:00:31.119414   25636 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0919 17:00:31.119422   25636 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0919 17:00:31.119430   25636 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0919 17:00:31.119434   25636 command_runner.go:130] > # metrics_collectors = [
	I0919 17:00:31.119440   25636 command_runner.go:130] > # 	"operations",
	I0919 17:00:31.119445   25636 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0919 17:00:31.119453   25636 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0919 17:00:31.119459   25636 command_runner.go:130] > # 	"operations_errors",
	I0919 17:00:31.119464   25636 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0919 17:00:31.119471   25636 command_runner.go:130] > # 	"image_pulls_by_name",
	I0919 17:00:31.119476   25636 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0919 17:00:31.119482   25636 command_runner.go:130] > # 	"image_pulls_failures",
	I0919 17:00:31.119487   25636 command_runner.go:130] > # 	"image_pulls_successes",
	I0919 17:00:31.119492   25636 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0919 17:00:31.119496   25636 command_runner.go:130] > # 	"image_layer_reuse",
	I0919 17:00:31.119502   25636 command_runner.go:130] > # 	"containers_oom_total",
	I0919 17:00:31.119506   25636 command_runner.go:130] > # 	"containers_oom",
	I0919 17:00:31.119513   25636 command_runner.go:130] > # 	"processes_defunct",
	I0919 17:00:31.119517   25636 command_runner.go:130] > # 	"operations_total",
	I0919 17:00:31.119524   25636 command_runner.go:130] > # 	"operations_latency_seconds",
	I0919 17:00:31.119528   25636 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0919 17:00:31.119535   25636 command_runner.go:130] > # 	"operations_errors_total",
	I0919 17:00:31.119539   25636 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0919 17:00:31.119546   25636 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0919 17:00:31.119550   25636 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0919 17:00:31.119557   25636 command_runner.go:130] > # 	"image_pulls_success_total",
	I0919 17:00:31.119562   25636 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0919 17:00:31.119568   25636 command_runner.go:130] > # 	"containers_oom_count_total",
	I0919 17:00:31.119572   25636 command_runner.go:130] > # ]
	I0919 17:00:31.119579   25636 command_runner.go:130] > # The port on which the metrics server will listen.
	I0919 17:00:31.119583   25636 command_runner.go:130] > # metrics_port = 9090
	I0919 17:00:31.119591   25636 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0919 17:00:31.119595   25636 command_runner.go:130] > # metrics_socket = ""
	I0919 17:00:31.119602   25636 command_runner.go:130] > # The certificate for the secure metrics server.
	I0919 17:00:31.119608   25636 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0919 17:00:31.119616   25636 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0919 17:00:31.119623   25636 command_runner.go:130] > # certificate on any modification event.
	I0919 17:00:31.119627   25636 command_runner.go:130] > # metrics_cert = ""
	I0919 17:00:31.119634   25636 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0919 17:00:31.119640   25636 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0919 17:00:31.119645   25636 command_runner.go:130] > # metrics_key = ""
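	With enable_metrics set to true above and metrics_port left at its default of 9090, the CRI-O metrics endpoint can be scraped by Prometheus; a minimal scrape-job sketch (not part of this run; the job name and target are illustrative) might look like:
	scrape_configs:
	  - job_name: crio                  # hypothetical job name
	    static_configs:
	      - targets:
	          - 192.168.39.11:9090      # this node's IP plus the default metrics_port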
	I0919 17:00:31.119653   25636 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0919 17:00:31.119658   25636 command_runner.go:130] > [crio.tracing]
	I0919 17:00:31.119667   25636 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0919 17:00:31.119677   25636 command_runner.go:130] > # enable_tracing = false
	I0919 17:00:31.119689   25636 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0919 17:00:31.119699   25636 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0919 17:00:31.119708   25636 command_runner.go:130] > # Number of samples to collect per million spans.
	I0919 17:00:31.119718   25636 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0919 17:00:31.119731   25636 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0919 17:00:31.119741   25636 command_runner.go:130] > [crio.stats]
	I0919 17:00:31.119753   25636 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0919 17:00:31.119765   25636 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0919 17:00:31.119775   25636 command_runner.go:130] > # stats_collection_period = 0
	I0919 17:00:31.119848   25636 cni.go:84] Creating CNI manager for ""
	I0919 17:00:31.119864   25636 cni.go:136] 2 nodes found, recommending kindnet
	I0919 17:00:31.119875   25636 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:00:31.119898   25636 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553715 NodeName:multinode-553715-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 17:00:31.120022   25636 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553715-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:00:31.120076   25636 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553715-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:00:31.120124   25636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 17:00:31.129329   25636 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	I0919 17:00:31.129360   25636 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	
	Initiating transfer...
	I0919 17:00:31.129409   25636 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.2
	I0919 17:00:31.138502   25636 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256
	I0919 17:00:31.138532   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.28.2/kubectl -> /var/lib/minikube/binaries/v1.28.2/kubectl
	I0919 17:00:31.138532   25636 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.28.2/kubeadm
	I0919 17:00:31.138541   25636 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.28.2/kubelet
	I0919 17:00:31.138599   25636 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl
	I0919 17:00:31.146152   25636 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I0919 17:00:31.146186   25636 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I0919 17:00:31.146206   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.28.2/kubectl --> /var/lib/minikube/binaries/v1.28.2/kubectl (49864704 bytes)
	I0919 17:00:32.224576   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.28.2/kubeadm -> /var/lib/minikube/binaries/v1.28.2/kubeadm
	I0919 17:00:32.224647   25636 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubeadm
	I0919 17:00:32.230373   25636 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I0919 17:00:32.230428   25636 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I0919 17:00:32.230461   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.28.2/kubeadm --> /var/lib/minikube/binaries/v1.28.2/kubeadm (50757632 bytes)
	I0919 17:00:32.724638   25636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:00:32.738215   25636 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.28.2/kubelet -> /var/lib/minikube/binaries/v1.28.2/kubelet
	I0919 17:00:32.738319   25636 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubelet
	I0919 17:00:32.743090   25636 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I0919 17:00:32.743163   25636 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I0919 17:00:32.743196   25636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.28.2/kubelet --> /var/lib/minikube/binaries/v1.28.2/kubelet (110776320 bytes)
	I0919 17:00:33.268564   25636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0919 17:00:33.277463   25636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0919 17:00:33.292159   25636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:00:33.309720   25636 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0919 17:00:33.313717   25636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:00:33.326836   25636 host.go:66] Checking if "multinode-553715" exists ...
	I0919 17:00:33.327107   25636 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:00:33.327236   25636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:00:33.327277   25636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:00:33.341730   25636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38835
	I0919 17:00:33.342096   25636 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:00:33.342501   25636 main.go:141] libmachine: Using API Version  1
	I0919 17:00:33.342524   25636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:00:33.342827   25636 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:00:33.343017   25636 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:00:33.343188   25636 start.go:304] JoinCluster: &{Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:00:33.343284   25636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 17:00:33.343298   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:00:33.346287   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:00:33.346838   25636 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:00:33.346873   25636 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:00:33.347051   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:00:33.347217   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:00:33.347356   25636 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:00:33.347460   25636 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:00:33.526871   25636 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token sydx3t.wms3mlx0802mihgc --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:00:33.527115   25636 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0919 17:00:33.527144   25636 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sydx3t.wms3mlx0802mihgc --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553715-m02"
	I0919 17:00:33.580486   25636 command_runner.go:130] > [preflight] Running pre-flight checks
	I0919 17:00:33.733289   25636 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0919 17:00:33.733320   25636 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0919 17:00:33.765605   25636 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:00:33.765764   25636 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:00:33.765785   25636 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0919 17:00:33.882889   25636 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0919 17:00:35.899566   25636 command_runner.go:130] > This node has joined the cluster:
	I0919 17:00:35.899595   25636 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0919 17:00:35.899601   25636 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0919 17:00:35.899607   25636 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0919 17:00:35.901796   25636 command_runner.go:130] ! W0919 17:00:33.558562     821 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0919 17:00:35.901823   25636 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:00:35.901846   25636 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token sydx3t.wms3mlx0802mihgc --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553715-m02": (2.37468178s)
	I0919 17:00:35.901865   25636 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 17:00:36.024485   25636 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0919 17:00:36.131306   25636 start.go:306] JoinCluster complete in 2.788107603s
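	The join above is driven purely by kubeadm command-line flags; expressed as a kubeadm.k8s.io/v1beta3 JoinConfiguration (shown only for reference, minikube does not write such a file here), the same settings would look roughly like:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: JoinConfiguration
	discovery:
	  bootstrapToken:
	    apiServerEndpoint: control-plane.minikube.internal:8443
	    token: sydx3t.wms3mlx0802mihgc
	    caCertHashes:
	      - "sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95"
	nodeRegistration:
	  name: multinode-553715-m02
	  criSocket: unix:///var/run/crio/crio.sock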
	I0919 17:00:36.131339   25636 cni.go:84] Creating CNI manager for ""
	I0919 17:00:36.131345   25636 cni.go:136] 2 nodes found, recommending kindnet
	I0919 17:00:36.131401   25636 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 17:00:36.137491   25636 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0919 17:00:36.137518   25636 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0919 17:00:36.137529   25636 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0919 17:00:36.137538   25636 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 17:00:36.137547   25636 command_runner.go:130] > Access: 2023-09-19 16:59:10.727580671 +0000
	I0919 17:00:36.137559   25636 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I0919 17:00:36.137579   25636 command_runner.go:130] > Change: 2023-09-19 16:59:08.979580671 +0000
	I0919 17:00:36.137589   25636 command_runner.go:130] >  Birth: -
	I0919 17:00:36.137848   25636 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0919 17:00:36.137863   25636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0919 17:00:36.157890   25636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 17:00:36.514493   25636 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0919 17:00:36.514517   25636 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0919 17:00:36.514523   25636 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0919 17:00:36.514528   25636 command_runner.go:130] > daemonset.apps/kindnet configured
	I0919 17:00:36.514852   25636 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:00:36.515053   25636 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:00:36.515314   25636 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 17:00:36.515324   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:36.515331   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:36.515337   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:36.517407   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:36.517422   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:36.517428   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:36.517434   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:36.517440   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:36.517450   25636 round_trippers.go:580]     Content-Length: 291
	I0919 17:00:36.517455   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:36 GMT
	I0919 17:00:36.517460   25636 round_trippers.go:580]     Audit-Id: 1707ee6f-3677-4cf6-add7-c230aff3e546
	I0919 17:00:36.517468   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:36.517488   25636 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e750729b-e558-4860-b72f-4a5c78572130","resourceVersion":"438","creationTimestamp":"2023-09-19T16:59:41Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0919 17:00:36.517564   25636 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553715" context rescaled to 1 replicas
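	The rescale above goes through the coredns deployment's scale subresource; the autoscaling/v1 Scale object returned in the response body corresponds, in YAML form, to roughly:
	apiVersion: autoscaling/v1
	kind: Scale
	metadata:
	  name: coredns
	  namespace: kube-system
	spec:
	  replicas: 1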
	I0919 17:00:36.517589   25636 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0919 17:00:36.519518   25636 out.go:177] * Verifying Kubernetes components...
	I0919 17:00:36.520832   25636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:00:36.535701   25636 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:00:36.535899   25636 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:00:36.536116   25636 node_ready.go:35] waiting up to 6m0s for node "multinode-553715-m02" to be "Ready" ...
	I0919 17:00:36.536170   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:36.536178   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:36.536185   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:36.536191   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:36.538475   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:36.538497   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:36.538506   25636 round_trippers.go:580]     Audit-Id: 031c21e6-b33f-4a3a-bac2-c877c0b80965
	I0919 17:00:36.538514   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:36.538522   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:36.538530   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:36.538539   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:36.538548   25636 round_trippers.go:580]     Content-Length: 3530
	I0919 17:00:36.538559   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:36 GMT
	I0919 17:00:36.538692   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"487","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0919 17:00:36.538978   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:36.538991   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:36.538998   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:36.539003   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:36.541074   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:36.541095   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:36.541102   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:36.541107   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:36.541113   25636 round_trippers.go:580]     Content-Length: 3530
	I0919 17:00:36.541121   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:36 GMT
	I0919 17:00:36.541132   25636 round_trippers.go:580]     Audit-Id: e1f5f044-9818-4f6a-a319-1369738ed727
	I0919 17:00:36.541144   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:36.541163   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:36.541236   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"487","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0919 17:00:37.041883   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:37.041914   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:37.041926   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:37.041948   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:37.044742   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:37.044768   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:37.044778   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:37.044786   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:37.044795   25636 round_trippers.go:580]     Content-Length: 3530
	I0919 17:00:37.044802   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:37 GMT
	I0919 17:00:37.044810   25636 round_trippers.go:580]     Audit-Id: 2cefd3ce-42d6-4e8c-8907-54ae651608da
	I0919 17:00:37.044817   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:37.044825   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:37.044912   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"487","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0919 17:00:37.542538   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:37.542559   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:37.542569   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:37.542574   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:37.545326   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:37.545350   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:37.545360   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:37.545369   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:37.545377   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:37.545386   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:37.545394   25636 round_trippers.go:580]     Content-Length: 3530
	I0919 17:00:37.545402   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:37 GMT
	I0919 17:00:37.545411   25636 round_trippers.go:580]     Audit-Id: 103e724c-4ea1-4afa-8599-2b53038d0f4f
	I0919 17:00:37.545469   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"487","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0919 17:00:38.041727   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:38.041759   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:38.041767   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:38.041772   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:38.044451   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:38.044469   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:38.044477   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:38.044482   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:38.044487   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:38.044492   25636 round_trippers.go:580]     Content-Length: 3530
	I0919 17:00:38.044497   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:38 GMT
	I0919 17:00:38.044516   25636 round_trippers.go:580]     Audit-Id: dd9f92b3-4b71-453c-ab4f-b2d5dace2637
	I0919 17:00:38.044526   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:38.044603   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"487","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I0919 17:00:38.542512   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:38.542532   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:38.542540   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:38.542546   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:38.616295   25636 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0919 17:00:38.616318   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:38.616330   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:38.616338   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:38.616347   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:38 GMT
	I0919 17:00:38.616355   25636 round_trippers.go:580]     Audit-Id: 9b63b9a6-1189-45f8-bae1-5ba56ada230c
	I0919 17:00:38.616364   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:38.616372   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:38.616381   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:38.616488   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:38.616773   25636 node_ready.go:58] node "multinode-553715-m02" has status "Ready":"False"
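	The "Ready" check above reads the node's status conditions; on a node that has just joined and is still waiting for its CNI, the relevant condition typically looks roughly like the sketch below (illustrative values, not copied from this run):
	status:
	  conditions:
	    - type: Ready
	      status: "False"                                  # what node_ready.go polls until it flips to "True"
	      reason: KubeletNotReady                          # illustrative
	      message: container runtime network not ready     # illustrative; clears once the CNI (kindnet here) is up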
	I0919 17:00:39.042409   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:39.042432   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:39.042440   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:39.042446   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:39.045854   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:39.045884   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:39.045895   25636 round_trippers.go:580]     Audit-Id: 41ad06f6-a1b5-444f-8fea-7d472a2219fe
	I0919 17:00:39.045907   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:39.045916   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:39.045927   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:39.045936   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:39.045946   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:39.045955   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:39 GMT
	I0919 17:00:39.046059   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:39.542599   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:39.542625   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:39.542635   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:39.542643   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:39.545196   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:39.545226   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:39.545236   25636 round_trippers.go:580]     Audit-Id: de10da81-600e-4193-ade0-c5af4acabdd5
	I0919 17:00:39.545245   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:39.545257   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:39.545266   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:39.545277   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:39.545288   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:39.545299   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:39 GMT
	I0919 17:00:39.545393   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:40.041741   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:40.041764   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:40.041772   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:40.041779   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:40.045339   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:40.045365   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:40.045375   25636 round_trippers.go:580]     Audit-Id: 750d7a0d-d774-412e-92eb-dda0ba4a715a
	I0919 17:00:40.045383   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:40.045392   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:40.045399   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:40.045408   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:40.045416   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:40.045426   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:40 GMT
	I0919 17:00:40.045533   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:40.542044   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:40.542073   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:40.542081   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:40.542095   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:40.544362   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:40.544386   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:40.544395   25636 round_trippers.go:580]     Audit-Id: 5b9a0ce9-16a0-4417-af03-7b6b0f167c75
	I0919 17:00:40.544412   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:40.544422   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:40.544435   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:40.544445   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:40.544452   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:40.544467   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:40 GMT
	I0919 17:00:40.544523   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:41.042102   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:41.042133   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:41.042148   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:41.042157   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:41.045338   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:41.045359   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:41.045368   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:41.045376   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:41.045384   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:41.045392   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:41 GMT
	I0919 17:00:41.045400   25636 round_trippers.go:580]     Audit-Id: 391b2d69-0859-4206-b1dc-81e87de467bf
	I0919 17:00:41.045407   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:41.045416   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:41.045495   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:41.045768   25636 node_ready.go:58] node "multinode-553715-m02" has status "Ready":"False"
	I0919 17:00:41.542040   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:41.542060   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:41.542068   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:41.542076   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:41.544973   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:41.544990   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:41.544997   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:41.545003   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:41.545008   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:41.545013   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:41.545018   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:41.545023   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:41 GMT
	I0919 17:00:41.545034   25636 round_trippers.go:580]     Audit-Id: 0e296889-1881-43fb-ac66-577a71e05e2e
	I0919 17:00:41.545116   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:42.041717   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:42.041740   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:42.041747   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:42.041753   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:42.044558   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:42.044578   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:42.044585   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:42.044591   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:42 GMT
	I0919 17:00:42.044596   25636 round_trippers.go:580]     Audit-Id: dd7f9289-1b11-46e9-b3fb-8fe0f8e89334
	I0919 17:00:42.044601   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:42.044606   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:42.044611   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:42.044616   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:42.044707   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:42.542324   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:42.542348   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:42.542356   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:42.542362   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:42.545105   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:42.545123   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:42.545129   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:42.545134   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:42.545139   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:42.545149   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:42 GMT
	I0919 17:00:42.545157   25636 round_trippers.go:580]     Audit-Id: f127e9ed-8cd3-43c5-8607-9392aacf4e59
	I0919 17:00:42.545162   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:42.545169   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:42.545299   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:43.042568   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:43.042593   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:43.042609   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:43.042618   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:43.045430   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:43.045450   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:43.045459   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:43 GMT
	I0919 17:00:43.045468   25636 round_trippers.go:580]     Audit-Id: 98492188-2d06-4e41-a9a4-b0845521a1fa
	I0919 17:00:43.045476   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:43.045484   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:43.045491   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:43.045500   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:43.045508   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:43.045584   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:43.045796   25636 node_ready.go:58] node "multinode-553715-m02" has status "Ready":"False"
	I0919 17:00:43.541786   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:43.541811   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:43.541822   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:43.541831   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:43.544533   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:43.544556   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:43.544566   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:43.544574   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:43.544582   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:43.544590   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:43 GMT
	I0919 17:00:43.544598   25636 round_trippers.go:580]     Audit-Id: 14d2730b-ae8d-437a-9e7a-34a89e2cc7e6
	I0919 17:00:43.544610   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:43.544621   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:43.544674   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:44.041973   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:44.041997   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:44.042008   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:44.042016   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:44.046004   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:44.046033   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:44.046044   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:44.046053   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:44.046062   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:44.046070   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:44.046078   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:44 GMT
	I0919 17:00:44.046090   25636 round_trippers.go:580]     Audit-Id: 468466c2-252a-42a3-b3cc-5d54dac2955d
	I0919 17:00:44.046099   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:44.046193   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:44.541679   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:44.541702   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:44.541710   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:44.541716   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:44.545686   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:44.545704   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:44.545714   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:44.545723   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:44.545731   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:44.545737   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:44.545742   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:44 GMT
	I0919 17:00:44.545747   25636 round_trippers.go:580]     Audit-Id: 6fdc3c04-a5c3-4c95-86aa-6e024d8c4ce0
	I0919 17:00:44.545751   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:44.545822   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:45.042415   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:45.042438   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:45.042446   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:45.042455   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:45.045814   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:45.045837   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:45.045884   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:45.045896   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:45 GMT
	I0919 17:00:45.045905   25636 round_trippers.go:580]     Audit-Id: d0901b27-1c2c-4257-a0a8-808391d348dc
	I0919 17:00:45.045911   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:45.045916   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:45.045921   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:45.045926   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:45.045999   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:45.046206   25636 node_ready.go:58] node "multinode-553715-m02" has status "Ready":"False"
	I0919 17:00:45.542559   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:45.542583   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:45.542595   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:45.542604   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:45.544671   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:45.544693   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:45.544703   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:45 GMT
	I0919 17:00:45.544712   25636 round_trippers.go:580]     Audit-Id: 3120a244-6e2a-4834-b3f2-f2c5146bc9aa
	I0919 17:00:45.544719   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:45.544727   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:45.544735   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:45.544749   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:45.544758   25636 round_trippers.go:580]     Content-Length: 3639
	I0919 17:00:45.544829   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"494","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I0919 17:00:46.042468   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:46.042491   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.042499   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.042505   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.045143   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:46.045167   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.045178   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.045186   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.045193   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.045200   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.045208   25636 round_trippers.go:580]     Content-Length: 3908
	I0919 17:00:46.045219   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.045231   25636 round_trippers.go:580]     Audit-Id: ab93a28e-c8d0-4ca7-9de4-bdfd1f449c1d
	I0919 17:00:46.045322   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"515","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2884 chars]
	I0919 17:00:46.541925   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:46.541952   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.541962   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.541970   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.545152   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:46.545168   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.545177   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.545186   25636 round_trippers.go:580]     Audit-Id: 99803b76-864b-4190-b010-91a2162b1dd3
	I0919 17:00:46.545198   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.545210   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.545222   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.545231   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.545244   25636 round_trippers.go:580]     Content-Length: 3725
	I0919 17:00:46.545290   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"518","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I0919 17:00:46.545509   25636 node_ready.go:49] node "multinode-553715-m02" has status "Ready":"True"
	I0919 17:00:46.545524   25636 node_ready.go:38] duration metric: took 10.009394992s waiting for node "multinode-553715-m02" to be "Ready" ...
	I0919 17:00:46.545534   25636 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
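
The polling visible above (a GET on the node object roughly every 500ms until its Ready condition reports True, then an extra wait for the system-critical pods) is the standard client-go readiness-watch pattern. Below is a minimal sketch of that pattern for reference; it assumes client-go and a hypothetical kubeconfig path, and it is illustrative only, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; minikube's tests wire up their own client.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "multinode-553715-m02"
	// Poll every 500ms (the cadence seen in the log above) for up to 6 minutes.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q is Ready\n", nodeName)
}
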
	I0919 17:00:46.545586   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:00:46.545605   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.545617   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.545630   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.549165   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:46.549181   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.549188   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.549197   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.549205   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.549213   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.549220   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.549229   25636 round_trippers.go:580]     Audit-Id: 2daa9938-9ddf-491c-bc5c-d366fa54520d
	I0919 17:00:46.551005   25636 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"434","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67324 chars]
	I0919 17:00:46.553837   25636 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.553920   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:00:46.553941   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.553952   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.553962   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.556721   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:46.556742   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.556751   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.556757   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.556762   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.556767   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.556775   25636 round_trippers.go:580]     Audit-Id: 1220ba7d-15f0-4939-83eb-827a909f2bbf
	I0919 17:00:46.556782   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.557044   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"434","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0919 17:00:46.557462   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:46.557477   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.557487   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.557495   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.559700   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:46.559714   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.559720   25636 round_trippers.go:580]     Audit-Id: 398b996b-0217-45e1-a265-e00833e3a96e
	I0919 17:00:46.559727   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.559734   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.559742   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.559754   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.559764   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.560190   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:46.560466   25636 pod_ready.go:92] pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:46.560479   25636 pod_ready.go:81] duration metric: took 6.61653ms waiting for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.560489   25636 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.560529   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:00:46.560536   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.560542   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.560548   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.562571   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:46.562586   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.562595   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.562603   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.562612   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.562628   25636 round_trippers.go:580]     Audit-Id: 80d1f489-3cc3-4e50-b1e0-d9da91c039e7
	I0919 17:00:46.562637   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.562650   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.562880   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"310","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0919 17:00:46.563188   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:46.563197   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.563204   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.563212   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.565267   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:46.565281   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.565289   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.565298   25636 round_trippers.go:580]     Audit-Id: e1af735b-1788-4a43-b819-6a78b58522be
	I0919 17:00:46.565307   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.565316   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.565326   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.565336   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.565694   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:46.565997   25636 pod_ready.go:92] pod "etcd-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:46.566012   25636 pod_ready.go:81] duration metric: took 5.515281ms waiting for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.566029   25636 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.566091   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553715
	I0919 17:00:46.566100   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.566109   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.566122   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.568019   25636 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:00:46.568032   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.568041   25636 round_trippers.go:580]     Audit-Id: c43932fc-1a29-4891-a112-bc0befa2decf
	I0919 17:00:46.568049   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.568058   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.568073   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.568082   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.568094   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.568255   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553715","namespace":"kube-system","uid":"e2712b6a-6771-4fb1-9b6d-e50e10e45411","resourceVersion":"308","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.mirror":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.seen":"2023-09-19T16:59:41.749099288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0919 17:00:46.568614   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:46.568626   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.568633   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.568640   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.570701   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:46.570714   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.570722   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.570731   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.570740   25636 round_trippers.go:580]     Audit-Id: a9e60ff3-0738-489c-977d-0bab91c35764
	I0919 17:00:46.570754   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.570763   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.570776   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.571001   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:46.571344   25636 pod_ready.go:92] pod "kube-apiserver-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:46.571360   25636 pod_ready.go:81] duration metric: took 5.317601ms waiting for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.571377   25636 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.571433   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553715
	I0919 17:00:46.571443   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.571454   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.571467   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.573609   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:46.573627   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.573636   25636 round_trippers.go:580]     Audit-Id: 19cb32ba-2f4f-4f27-ae14-64168824533e
	I0919 17:00:46.573645   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.573657   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.573667   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.573680   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.573688   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.573920   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553715","namespace":"kube-system","uid":"56eb8685-d2ae-4f50-8da1-dca616585190","resourceVersion":"313","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.mirror":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.seen":"2023-09-19T16:59:41.749100351Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0919 17:00:46.574329   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:46.574344   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.574353   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.574359   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.577389   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:46.577403   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.577412   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.577420   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.577428   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.577438   25636 round_trippers.go:580]     Audit-Id: 3ed354d4-a99b-47be-8713-74d55c4d221b
	I0919 17:00:46.577450   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.577460   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.578064   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:46.578308   25636 pod_ready.go:92] pod "kube-controller-manager-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:46.578322   25636 pod_ready.go:81] duration metric: took 6.932357ms waiting for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.578333   25636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.742712   25636 request.go:629] Waited for 164.306612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:00:46.742774   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:00:46.742782   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.742793   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.742808   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.745903   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:46.745919   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.745926   25636 round_trippers.go:580]     Audit-Id: e8a3bff7-d65e-4d40-82e0-5402891411b6
	I0919 17:00:46.745931   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.745936   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.745941   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.745947   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.745955   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.746168   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d5vl8","generateName":"kube-proxy-","namespace":"kube-system","uid":"88ab05d6-264f-40d8-9c55-c58829613212","resourceVersion":"503","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0919 17:00:46.942918   25636 request.go:629] Waited for 196.366904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:46.942990   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:00:46.942996   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:46.943008   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:46.943023   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:46.947312   25636 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:00:46.947330   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:46.947341   25636 round_trippers.go:580]     Content-Length: 3725
	I0919 17:00:46.947349   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:46 GMT
	I0919 17:00:46.947357   25636 round_trippers.go:580]     Audit-Id: c1d0ba3e-0e62-4564-be4c-ceaf9723a6e2
	I0919 17:00:46.947368   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:46.947378   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:46.947388   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:46.947401   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:46.947526   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"518","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I0919 17:00:46.947750   25636 pod_ready.go:92] pod "kube-proxy-d5vl8" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:46.947763   25636 pod_ready.go:81] duration metric: took 369.424419ms waiting for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:46.947773   25636 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:47.142102   25636 request.go:629] Waited for 194.276268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:00:47.142167   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:00:47.142172   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:47.142180   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:47.142189   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:47.145292   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:47.145311   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:47.145321   25636 round_trippers.go:580]     Audit-Id: 4af5e4b7-50e5-4514-b904-7e6533c56b9c
	I0919 17:00:47.145329   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:47.145337   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:47.145346   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:47.145355   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:47.145366   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:47 GMT
	I0919 17:00:47.145819   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvcz9","generateName":"kube-proxy-","namespace":"kube-system","uid":"377d6478-cda2-47b9-8af8-cff3064e8524","resourceVersion":"404","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0919 17:00:47.342543   25636 request.go:629] Waited for 196.348993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:47.342615   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:47.342620   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:47.342627   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:47.342634   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:47.345802   25636 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:00:47.345821   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:47.345830   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:47.345838   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:47.345846   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:47.345854   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:47.345866   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:47 GMT
	I0919 17:00:47.345876   25636 round_trippers.go:580]     Audit-Id: 42499bed-ab33-4cd5-ad21-ed056eb34fa9
	I0919 17:00:47.346334   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:47.346631   25636 pod_ready.go:92] pod "kube-proxy-tvcz9" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:47.346646   25636 pod_ready.go:81] duration metric: took 398.865146ms waiting for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:47.346666   25636 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:47.542030   25636 request.go:629] Waited for 195.300267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:00:47.542086   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:00:47.542091   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:47.542098   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:47.542104   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:47.544902   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:47.544922   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:47.544931   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:47.544937   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:47.544941   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:47.544947   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:47.544952   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:47 GMT
	I0919 17:00:47.544957   25636 round_trippers.go:580]     Audit-Id: 84836977-1b4e-4a49-9f43-2247724be2ac
	I0919 17:00:47.545098   25636 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553715","namespace":"kube-system","uid":"27c15070-fba4-4237-b6d2-4727af1e5809","resourceVersion":"389","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.mirror":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.seen":"2023-09-19T16:59:41.749088169Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0919 17:00:47.742784   25636 request.go:629] Waited for 197.336076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:47.742837   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:00:47.742844   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:47.742852   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:47.742858   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:47.745783   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:47.745799   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:47.745811   25636 round_trippers.go:580]     Audit-Id: 29947cfb-643e-4f80-aa22-a37bf944f628
	I0919 17:00:47.745820   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:47.745828   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:47.745838   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:47.745845   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:47.745855   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:47 GMT
	I0919 17:00:47.746193   25636 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I0919 17:00:47.746472   25636 pod_ready.go:92] pod "kube-scheduler-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:00:47.746485   25636 pod_ready.go:81] duration metric: took 399.807954ms waiting for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:00:47.746496   25636 pod_ready.go:38] duration metric: took 1.200949872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:00:47.746509   25636 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:00:47.746552   25636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:00:47.759977   25636 system_svc.go:56] duration metric: took 13.462564ms WaitForService to wait for kubelet.
	I0919 17:00:47.759994   25636 kubeadm.go:581] duration metric: took 11.242386711s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:00:47.760009   25636 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:00:47.942435   25636 request.go:629] Waited for 182.355423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I0919 17:00:47.942493   25636 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I0919 17:00:47.942501   25636 round_trippers.go:469] Request Headers:
	I0919 17:00:47.942512   25636 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:00:47.942523   25636 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:00:47.945407   25636 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:00:47.945428   25636 round_trippers.go:577] Response Headers:
	I0919 17:00:47.945435   25636 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:00:47.945441   25636 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:00:47.945450   25636 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:00:47 GMT
	I0919 17:00:47.945455   25636 round_trippers.go:580]     Audit-Id: e335d412-0ac9-4165-950e-8578b6caa1ea
	I0919 17:00:47.945460   25636 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:00:47.945465   25636 round_trippers.go:580]     Content-Type: application/json
	I0919 17:00:47.946160   25636 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"519"},"items":[{"metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"414","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9644 chars]
	I0919 17:00:47.946688   25636 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:00:47.946709   25636 node_conditions.go:123] node cpu capacity is 2
	I0919 17:00:47.946723   25636 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:00:47.946729   25636 node_conditions.go:123] node cpu capacity is 2
	I0919 17:00:47.946735   25636 node_conditions.go:105] duration metric: took 186.720574ms to run NodePressure ...
	I0919 17:00:47.946749   25636 start.go:228] waiting for startup goroutines ...
	I0919 17:00:47.946780   25636 start.go:242] writing updated cluster config ...
	I0919 17:00:47.947127   25636 ssh_runner.go:195] Run: rm -f paused
	I0919 17:00:47.994942   25636 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:00:47.997619   25636 out.go:177] * Done! kubectl is now configured to use "multinode-553715" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 16:59:09 UTC, ends at Tue 2023-09-19 17:00:56 UTC. --
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.169541464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695142856169528514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=761d250d-cd5e-4ed8-a0bc-91a24c24276a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.170274490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c8525307-1718-4564-b33e-885a1fd69191 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.170351437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c8525307-1718-4564-b33e-885a1fd69191 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.171183500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd046e3b462b6a6a9a98829c240d8f879fd713d8a702b6663dcf6f13ac40b17a,PodSandboxId:ce9cc7fc2f378af11faf0ed00bc032a298480ab24ad649fb4e74449104667475,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1695142852215010253,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xj8tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92501a7-dae6-46bb-afb7-2ea5795f162d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d866c54,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbc7ae51cf19bb5409eba3cc7503db6d0f38a17a6409e5851853f67e8a55f13,PodSandboxId:d7e54387a8ffe4f188a6b3bee920e64cd1835b62789aebdee600cce6da238dce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695142800994025536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pffkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc226fb-43a9-4e0f-ac99-614f2740485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7b97018c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30af27c008da6888a599042702334eca383035700a6ee1f1a95bc613018e03d,PodSandboxId:6f67b0a2d51122353e7b68f563fd90d12ae11e17d03701c5b0a310ff81d7e5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695142800741888218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16863947fee5eb5c97e1eca8198f991d58c3cf2f25f638f1e56e51c0dc79ca,PodSandboxId:e04cfef22e5cc6905862b211716511a5914f2b4958fbc1b5d4ac606fe4d62a6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1695142798223449539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lmmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2479ec2b-6cd3-4fb2-b85f-43b175cfbb79,},Annotations:map[string]string{io.kubernetes.container.hash: 3da7acbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafec271adc2507432b2d6fd5938939e1cc62ffef2d76a8f58d5d91510b81887,PodSandboxId:65f892e00feee90aac86efbbe071ae1d3ae13d077d4fbbd5b56db2b83fb1e463,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695142796286313436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvcz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d6478-cda2-47b9-8af8-cff306
4e8524,},Annotations:map[string]string{io.kubernetes.container.hash: 11af597a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ddeae9d4c878be3425d7e5a32b00f99b0af6c24d7276f29f8c7c9e6010c895,PodSandboxId:622aee4bde6ee5266e9f0172bc2386be1999f870b8621c0c686dd45ad524b452,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695142774829387126,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa979cb86e107ca9bf520d48522186cc,},Ann
otations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39aa41434d37a253322bb7d2c0a398d72458c10952025636035ffdc9b5743b5,PodSandboxId:09c687cebeb79e5bba2a7d586241856f1c63ba625c6992927002579061098059,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695142774665542744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e33690ea2d34a4cb01de0af39fba7d80,},Annotations:map
[string]string{io.kubernetes.container.hash: a39fc94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6adb8a44e159980181c237bde4598c9c52ff28f4cecde7017137e14e9637c35,PodSandboxId:867c1792fc9950900cc0423af62b75ff6cc5d7c81adeea66d62bd41f274e5623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695142774513353443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff6f265dbf948bb708b38919b67
5e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19a24f522a8d904a4605c54b476fe6d2a36579df8c26e74f85c34a89e7f4d1f,PodSandboxId:cb43ae771b505488d698bad655aacbec6f58581ea9a3ce2373daf4cee33f1291,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695142774313902446,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aec1a98400bc46affacaeefcf7efa64,},Annotations:map[string]string{io.kubernetes.
container.hash: 9a550da8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c8525307-1718-4564-b33e-885a1fd69191 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.211338167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a6964fd2-fdd8-4b45-936b-d77c780aa0a2 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.211395696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a6964fd2-fdd8-4b45-936b-d77c780aa0a2 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.212779012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=be390e09-cef9-4d5b-a59a-74b790ca74e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.213218044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695142856213202976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=be390e09-cef9-4d5b-a59a-74b790ca74e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.213993365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=541ec815-abe9-4b96-b74d-35f3d7108520 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.214049159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=541ec815-abe9-4b96-b74d-35f3d7108520 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.214250858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd046e3b462b6a6a9a98829c240d8f879fd713d8a702b6663dcf6f13ac40b17a,PodSandboxId:ce9cc7fc2f378af11faf0ed00bc032a298480ab24ad649fb4e74449104667475,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1695142852215010253,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xj8tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92501a7-dae6-46bb-afb7-2ea5795f162d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d866c54,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbc7ae51cf19bb5409eba3cc7503db6d0f38a17a6409e5851853f67e8a55f13,PodSandboxId:d7e54387a8ffe4f188a6b3bee920e64cd1835b62789aebdee600cce6da238dce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695142800994025536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pffkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc226fb-43a9-4e0f-ac99-614f2740485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7b97018c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30af27c008da6888a599042702334eca383035700a6ee1f1a95bc613018e03d,PodSandboxId:6f67b0a2d51122353e7b68f563fd90d12ae11e17d03701c5b0a310ff81d7e5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695142800741888218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16863947fee5eb5c97e1eca8198f991d58c3cf2f25f638f1e56e51c0dc79ca,PodSandboxId:e04cfef22e5cc6905862b211716511a5914f2b4958fbc1b5d4ac606fe4d62a6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1695142798223449539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lmmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2479ec2b-6cd3-4fb2-b85f-43b175cfbb79,},Annotations:map[string]string{io.kubernetes.container.hash: 3da7acbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafec271adc2507432b2d6fd5938939e1cc62ffef2d76a8f58d5d91510b81887,PodSandboxId:65f892e00feee90aac86efbbe071ae1d3ae13d077d4fbbd5b56db2b83fb1e463,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695142796286313436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvcz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d6478-cda2-47b9-8af8-cff306
4e8524,},Annotations:map[string]string{io.kubernetes.container.hash: 11af597a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ddeae9d4c878be3425d7e5a32b00f99b0af6c24d7276f29f8c7c9e6010c895,PodSandboxId:622aee4bde6ee5266e9f0172bc2386be1999f870b8621c0c686dd45ad524b452,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695142774829387126,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa979cb86e107ca9bf520d48522186cc,},Ann
otations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39aa41434d37a253322bb7d2c0a398d72458c10952025636035ffdc9b5743b5,PodSandboxId:09c687cebeb79e5bba2a7d586241856f1c63ba625c6992927002579061098059,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695142774665542744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e33690ea2d34a4cb01de0af39fba7d80,},Annotations:map
[string]string{io.kubernetes.container.hash: a39fc94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6adb8a44e159980181c237bde4598c9c52ff28f4cecde7017137e14e9637c35,PodSandboxId:867c1792fc9950900cc0423af62b75ff6cc5d7c81adeea66d62bd41f274e5623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695142774513353443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff6f265dbf948bb708b38919b67
5e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19a24f522a8d904a4605c54b476fe6d2a36579df8c26e74f85c34a89e7f4d1f,PodSandboxId:cb43ae771b505488d698bad655aacbec6f58581ea9a3ce2373daf4cee33f1291,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695142774313902446,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aec1a98400bc46affacaeefcf7efa64,},Annotations:map[string]string{io.kubernetes.
container.hash: 9a550da8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=541ec815-abe9-4b96-b74d-35f3d7108520 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.256914608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=eb76a88e-6362-42d3-9ee3-7b0198610b80 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.256972944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=eb76a88e-6362-42d3-9ee3-7b0198610b80 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.257928576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=be68d343-9df1-4f09-b42b-113673154a77 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.258289400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695142856258277527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=be68d343-9df1-4f09-b42b-113673154a77 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.259190588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=61d83356-f467-448e-aa05-bac3f64a02e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.259235811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=61d83356-f467-448e-aa05-bac3f64a02e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.259427302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd046e3b462b6a6a9a98829c240d8f879fd713d8a702b6663dcf6f13ac40b17a,PodSandboxId:ce9cc7fc2f378af11faf0ed00bc032a298480ab24ad649fb4e74449104667475,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1695142852215010253,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xj8tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92501a7-dae6-46bb-afb7-2ea5795f162d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d866c54,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbc7ae51cf19bb5409eba3cc7503db6d0f38a17a6409e5851853f67e8a55f13,PodSandboxId:d7e54387a8ffe4f188a6b3bee920e64cd1835b62789aebdee600cce6da238dce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695142800994025536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pffkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc226fb-43a9-4e0f-ac99-614f2740485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7b97018c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30af27c008da6888a599042702334eca383035700a6ee1f1a95bc613018e03d,PodSandboxId:6f67b0a2d51122353e7b68f563fd90d12ae11e17d03701c5b0a310ff81d7e5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695142800741888218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16863947fee5eb5c97e1eca8198f991d58c3cf2f25f638f1e56e51c0dc79ca,PodSandboxId:e04cfef22e5cc6905862b211716511a5914f2b4958fbc1b5d4ac606fe4d62a6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1695142798223449539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lmmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2479ec2b-6cd3-4fb2-b85f-43b175cfbb79,},Annotations:map[string]string{io.kubernetes.container.hash: 3da7acbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafec271adc2507432b2d6fd5938939e1cc62ffef2d76a8f58d5d91510b81887,PodSandboxId:65f892e00feee90aac86efbbe071ae1d3ae13d077d4fbbd5b56db2b83fb1e463,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695142796286313436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvcz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d6478-cda2-47b9-8af8-cff306
4e8524,},Annotations:map[string]string{io.kubernetes.container.hash: 11af597a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ddeae9d4c878be3425d7e5a32b00f99b0af6c24d7276f29f8c7c9e6010c895,PodSandboxId:622aee4bde6ee5266e9f0172bc2386be1999f870b8621c0c686dd45ad524b452,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695142774829387126,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa979cb86e107ca9bf520d48522186cc,},Ann
otations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39aa41434d37a253322bb7d2c0a398d72458c10952025636035ffdc9b5743b5,PodSandboxId:09c687cebeb79e5bba2a7d586241856f1c63ba625c6992927002579061098059,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695142774665542744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e33690ea2d34a4cb01de0af39fba7d80,},Annotations:map
[string]string{io.kubernetes.container.hash: a39fc94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6adb8a44e159980181c237bde4598c9c52ff28f4cecde7017137e14e9637c35,PodSandboxId:867c1792fc9950900cc0423af62b75ff6cc5d7c81adeea66d62bd41f274e5623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695142774513353443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff6f265dbf948bb708b38919b67
5e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19a24f522a8d904a4605c54b476fe6d2a36579df8c26e74f85c34a89e7f4d1f,PodSandboxId:cb43ae771b505488d698bad655aacbec6f58581ea9a3ce2373daf4cee33f1291,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695142774313902446,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aec1a98400bc46affacaeefcf7efa64,},Annotations:map[string]string{io.kubernetes.
container.hash: 9a550da8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=61d83356-f467-448e-aa05-bac3f64a02e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.303038844Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ae76d960-750b-4b51-a3bc-d16d0f1977d4 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.303095729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ae76d960-750b-4b51-a3bc-d16d0f1977d4 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.305136567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9ed8ea3d-7f2a-408e-b791-0e79d55b709f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.305555898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695142856305538638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9ed8ea3d-7f2a-408e-b791-0e79d55b709f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.306482409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bc7a52e6-2bed-476e-9f2f-03d43e746e85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.306526289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bc7a52e6-2bed-476e-9f2f-03d43e746e85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:00:56 multinode-553715 crio[719]: time="2023-09-19 17:00:56.306821828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bd046e3b462b6a6a9a98829c240d8f879fd713d8a702b6663dcf6f13ac40b17a,PodSandboxId:ce9cc7fc2f378af11faf0ed00bc032a298480ab24ad649fb4e74449104667475,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1695142852215010253,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xj8tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92501a7-dae6-46bb-afb7-2ea5795f162d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d866c54,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afbc7ae51cf19bb5409eba3cc7503db6d0f38a17a6409e5851853f67e8a55f13,PodSandboxId:d7e54387a8ffe4f188a6b3bee920e64cd1835b62789aebdee600cce6da238dce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695142800994025536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pffkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc226fb-43a9-4e0f-ac99-614f2740485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7b97018c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30af27c008da6888a599042702334eca383035700a6ee1f1a95bc613018e03d,PodSandboxId:6f67b0a2d51122353e7b68f563fd90d12ae11e17d03701c5b0a310ff81d7e5ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695142800741888218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16863947fee5eb5c97e1eca8198f991d58c3cf2f25f638f1e56e51c0dc79ca,PodSandboxId:e04cfef22e5cc6905862b211716511a5914f2b4958fbc1b5d4ac606fe4d62a6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1695142798223449539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lmmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2479ec2b-6cd3-4fb2-b85f-43b175cfbb79,},Annotations:map[string]string{io.kubernetes.container.hash: 3da7acbd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cafec271adc2507432b2d6fd5938939e1cc62ffef2d76a8f58d5d91510b81887,PodSandboxId:65f892e00feee90aac86efbbe071ae1d3ae13d077d4fbbd5b56db2b83fb1e463,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695142796286313436,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvcz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d6478-cda2-47b9-8af8-cff306
4e8524,},Annotations:map[string]string{io.kubernetes.container.hash: 11af597a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ddeae9d4c878be3425d7e5a32b00f99b0af6c24d7276f29f8c7c9e6010c895,PodSandboxId:622aee4bde6ee5266e9f0172bc2386be1999f870b8621c0c686dd45ad524b452,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695142774829387126,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa979cb86e107ca9bf520d48522186cc,},Ann
otations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c39aa41434d37a253322bb7d2c0a398d72458c10952025636035ffdc9b5743b5,PodSandboxId:09c687cebeb79e5bba2a7d586241856f1c63ba625c6992927002579061098059,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695142774665542744,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e33690ea2d34a4cb01de0af39fba7d80,},Annotations:map
[string]string{io.kubernetes.container.hash: a39fc94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6adb8a44e159980181c237bde4598c9c52ff28f4cecde7017137e14e9637c35,PodSandboxId:867c1792fc9950900cc0423af62b75ff6cc5d7c81adeea66d62bd41f274e5623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695142774513353443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff6f265dbf948bb708b38919b67
5e1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c19a24f522a8d904a4605c54b476fe6d2a36579df8c26e74f85c34a89e7f4d1f,PodSandboxId:cb43ae771b505488d698bad655aacbec6f58581ea9a3ce2373daf4cee33f1291,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695142774313902446,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aec1a98400bc46affacaeefcf7efa64,},Annotations:map[string]string{io.kubernetes.
container.hash: 9a550da8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bc7a52e6-2bed-476e-9f2f-03d43e746e85 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bd046e3b462b6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   ce9cc7fc2f378       busybox-5bc68d56bd-xj8tc
	afbc7ae51cf19       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      55 seconds ago       Running             coredns                   0                   d7e54387a8ffe       coredns-5dd5756b68-pffkm
	d30af27c008da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      55 seconds ago       Running             storage-provisioner       0                   6f67b0a2d5112       storage-provisioner
	8d16863947fee       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      58 seconds ago       Running             kindnet-cni               0                   e04cfef22e5cc       kindnet-lmmc5
	cafec271adc25       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      About a minute ago   Running             kube-proxy                0                   65f892e00feee       kube-proxy-tvcz9
	31ddeae9d4c87       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      About a minute ago   Running             kube-scheduler            0                   622aee4bde6ee       kube-scheduler-multinode-553715
	c39aa41434d37       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      About a minute ago   Running             kube-apiserver            0                   09c687cebeb79       kube-apiserver-multinode-553715
	d6adb8a44e159       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      About a minute ago   Running             kube-controller-manager   0                   867c1792fc995       kube-controller-manager-multinode-553715
	c19a24f522a8d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   cb43ae771b505       etcd-multinode-553715
	
	* 
	* ==> coredns [afbc7ae51cf19bb5409eba3cc7503db6d0f38a17a6409e5851853f67e8a55f13] <==
	* [INFO] 10.244.1.2:38353 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000221793s
	[INFO] 10.244.0.3:47392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127347s
	[INFO] 10.244.0.3:52811 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001707713s
	[INFO] 10.244.0.3:47404 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025086s
	[INFO] 10.244.0.3:46085 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000435231s
	[INFO] 10.244.0.3:37989 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001104563s
	[INFO] 10.244.0.3:49540 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000042769s
	[INFO] 10.244.0.3:44139 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000033574s
	[INFO] 10.244.0.3:46905 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000030227s
	[INFO] 10.244.1.2:35430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134809s
	[INFO] 10.244.1.2:56452 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241775s
	[INFO] 10.244.1.2:59683 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125832s
	[INFO] 10.244.1.2:40833 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101475s
	[INFO] 10.244.0.3:41998 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155503s
	[INFO] 10.244.0.3:37949 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129171s
	[INFO] 10.244.0.3:32798 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000044637s
	[INFO] 10.244.0.3:34765 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084301s
	[INFO] 10.244.1.2:54989 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209534s
	[INFO] 10.244.1.2:56062 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000207227s
	[INFO] 10.244.1.2:54988 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140385s
	[INFO] 10.244.1.2:51648 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169257s
	[INFO] 10.244.0.3:32904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016102s
	[INFO] 10.244.0.3:40421 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000069464s
	[INFO] 10.244.0.3:46174 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059334s
	[INFO] 10.244.0.3:43135 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000061676s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-553715
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553715
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=multinode-553715
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T16_59_42_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:59:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-553715
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 17:00:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 16:59:59 +0000   Tue, 19 Sep 2023 16:59:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 16:59:59 +0000   Tue, 19 Sep 2023 16:59:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 16:59:59 +0000   Tue, 19 Sep 2023 16:59:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 16:59:59 +0000   Tue, 19 Sep 2023 16:59:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    multinode-553715
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c97d65ac98704ad7a5677568b3778fc7
	  System UUID:                c97d65ac-9870-4ad7-a567-7568b3778fc7
	  Boot ID:                    0c16b162-567a-4972-93b8-0755c5fe111b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xj8tc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-pffkm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-multinode-553715                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	  kube-system                 kindnet-lmmc5                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      62s
	  kube-system                 kube-apiserver-multinode-553715             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-multinode-553715    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-tvcz9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-multinode-553715             100m (5%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 60s                kube-proxy       
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node multinode-553715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node multinode-553715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node multinode-553715 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  74s                kubelet          Node multinode-553715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s                kubelet          Node multinode-553715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s                kubelet          Node multinode-553715 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           63s                node-controller  Node multinode-553715 event: Registered Node multinode-553715 in Controller
	  Normal  NodeReady                57s                kubelet          Node multinode-553715 status is now: NodeReady
	
	
	Name:               multinode-553715-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553715-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:00:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-553715-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 17:00:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:00:46 +0000   Tue, 19 Sep 2023 17:00:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:00:46 +0000   Tue, 19 Sep 2023 17:00:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:00:46 +0000   Tue, 19 Sep 2023 17:00:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:00:46 +0000   Tue, 19 Sep 2023 17:00:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    multinode-553715-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0239eab67234c52995c51bf9e0aa8db
	  System UUID:                e0239eab-6723-4c52-995c-51bf9e0aa8db
	  Boot ID:                    eca459e4-d007-4468-8d0d-7543c98e0af9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-m9sw8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-ccllv               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21s
	  kube-system                 kube-proxy-d5vl8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientMemory  21s (x5 over 22s)  kubelet          Node multinode-553715-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x5 over 22s)  kubelet          Node multinode-553715-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x5 over 22s)  kubelet          Node multinode-553715-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node multinode-553715-m02 event: Registered Node multinode-553715-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-553715-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Sep19 16:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072296] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.313589] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.344432] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142382] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.990709] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.750898] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.105385] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.139520] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.104131] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.196599] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +9.976894] systemd-fstab-generator[924]: Ignoring "noauto" for root device
	[  +8.785430] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[Sep19 17:00] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [c19a24f522a8d904a4605c54b476fe6d2a36579df8c26e74f85c34a89e7f4d1f] <==
	* {"level":"info","ts":"2023-09-19T16:59:36.434997Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","added-peer-id":"38b26e584d45e0da","added-peer-peer-urls":["https://192.168.39.38:2380"]}
	{"level":"info","ts":"2023-09-19T16:59:36.437708Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-19T16:59:36.437843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2023-09-19T16:59:36.437994Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2023-09-19T16:59:36.442394Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-19T16:59:36.442323Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"38b26e584d45e0da","initial-advertise-peer-urls":["https://192.168.39.38:2380"],"listen-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-19T16:59:37.089717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-19T16:59:37.089827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-19T16:59:37.089874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgPreVoteResp from 38b26e584d45e0da at term 1"}
	{"level":"info","ts":"2023-09-19T16:59:37.089908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became candidate at term 2"}
	{"level":"info","ts":"2023-09-19T16:59:37.089932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgVoteResp from 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2023-09-19T16:59:37.089959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became leader at term 2"}
	{"level":"info","ts":"2023-09-19T16:59:37.089985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38b26e584d45e0da elected leader 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2023-09-19T16:59:37.093002Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"38b26e584d45e0da","local-member-attributes":"{Name:multinode-553715 ClientURLs:[https://192.168.39.38:2379]}","request-path":"/0/members/38b26e584d45e0da/attributes","cluster-id":"afb1a6a08b4dab74","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T16:59:37.09306Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:59:37.093651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:59:37.09413Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T16:59:37.094826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.38:2379"}
	{"level":"info","ts":"2023-09-19T16:59:37.094952Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:59:37.09798Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:59:37.098089Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:59:37.09811Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:59:37.099721Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T16:59:37.099759Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:00:39.277953Z","caller":"traceutil/trace.go:171","msg":"trace[1081616363] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"125.130503ms","start":"2023-09-19T17:00:39.152795Z","end":"2023-09-19T17:00:39.277926Z","steps":["trace[1081616363] 'process raft request'  (duration: 124.976551ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  17:00:56 up 1 min,  0 users,  load average: 0.90, 0.33, 0.12
	Linux multinode-553715 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [8d16863947fee5eb5c97e1eca8198f991d58c3cf2f25f638f1e56e51c0dc79ca] <==
	* I0919 16:59:59.170757       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0919 16:59:59.170929       1 main.go:107] hostIP = 192.168.39.38
	podIP = 192.168.39.38
	I0919 16:59:59.171303       1 main.go:116] setting mtu 1500 for CNI 
	I0919 16:59:59.171358       1 main.go:146] kindnetd IP family: "ipv4"
	I0919 16:59:59.171404       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0919 16:59:59.770870       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 16:59:59.770944       1 main.go:227] handling current node
	I0919 17:00:09.781469       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 17:00:09.781535       1 main.go:227] handling current node
	I0919 17:00:19.794908       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 17:00:19.794996       1 main.go:227] handling current node
	I0919 17:00:29.804868       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 17:00:29.804915       1 main.go:227] handling current node
	I0919 17:00:39.819157       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 17:00:39.819244       1 main.go:227] handling current node
	I0919 17:00:39.819286       1 main.go:223] Handling node with IPs: map[192.168.39.11:{}]
	I0919 17:00:39.819299       1 main.go:250] Node multinode-553715-m02 has CIDR [10.244.1.0/24] 
	I0919 17:00:39.819989       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.11 Flags: [] Table: 0} 
	I0919 17:00:49.834774       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 17:00:49.834816       1 main.go:227] handling current node
	I0919 17:00:49.834827       1 main.go:223] Handling node with IPs: map[192.168.39.11:{}]
	I0919 17:00:49.834833       1 main.go:250] Node multinode-553715-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [c39aa41434d37a253322bb7d2c0a398d72458c10952025636035ffdc9b5743b5] <==
	* I0919 16:59:38.547953       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0919 16:59:38.547995       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0919 16:59:38.575125       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0919 16:59:38.582308       1 controller.go:624] quota admission added evaluator for: namespaces
	I0919 16:59:38.603293       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0919 16:59:38.603368       1 aggregator.go:166] initial CRD sync complete...
	I0919 16:59:38.603402       1 autoregister_controller.go:141] Starting autoregister controller
	I0919 16:59:38.603424       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 16:59:38.603447       1 cache.go:39] Caches are synced for autoregister controller
	I0919 16:59:38.629705       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 16:59:39.453169       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0919 16:59:39.457791       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0919 16:59:39.457833       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 16:59:40.101483       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 16:59:40.199189       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 16:59:40.308791       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0919 16:59:40.322419       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.38]
	I0919 16:59:40.323517       1 controller.go:624] quota admission added evaluator for: endpoints
	I0919 16:59:40.330687       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 16:59:40.534404       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0919 16:59:41.621059       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0919 16:59:41.654190       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 16:59:41.667379       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0919 16:59:53.997121       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0919 16:59:54.260943       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [d6adb8a44e159980181c237bde4598c9c52ff28f4cecde7017137e14e9637c35] <==
	* I0919 16:59:54.767831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.245µs"
	I0919 16:59:59.938168       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="393.744µs"
	I0919 16:59:59.968560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.318µs"
	I0919 17:00:01.988305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.8µs"
	I0919 17:00:02.048984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.872724ms"
	I0919 17:00:02.050991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="730.507µs"
	I0919 17:00:03.399381       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0919 17:00:35.758457       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-553715-m02\" does not exist"
	I0919 17:00:35.779463       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ccllv"
	I0919 17:00:35.779538       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d5vl8"
	I0919 17:00:35.787395       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-553715-m02" podCIDRs=["10.244.1.0/24"]
	I0919 17:00:38.406135       1 event.go:307] "Event occurred" object="multinode-553715-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-553715-m02 event: Registered Node multinode-553715-m02 in Controller"
	I0919 17:00:38.406355       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-553715-m02"
	I0919 17:00:46.176262       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553715-m02"
	I0919 17:00:48.672145       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0919 17:00:48.692506       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-m9sw8"
	I0919 17:00:48.699974       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-xj8tc"
	I0919 17:00:48.725169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.620805ms"
	I0919 17:00:48.737875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.529518ms"
	I0919 17:00:48.738187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="66.442µs"
	I0919 17:00:48.744317       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.504µs"
	I0919 17:00:52.579236       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.147498ms"
	I0919 17:00:52.579423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.024µs"
	I0919 17:00:53.163997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.562177ms"
	I0919 17:00:53.164304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.033µs"
	
	* 
	* ==> kube-proxy [cafec271adc2507432b2d6fd5938939e1cc62ffef2d76a8f58d5d91510b81887] <==
	* I0919 16:59:56.480974       1 server_others.go:69] "Using iptables proxy"
	I0919 16:59:56.490955       1 node.go:141] Successfully retrieved node IP: 192.168.39.38
	I0919 16:59:56.537819       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 16:59:56.537887       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 16:59:56.541219       1 server_others.go:152] "Using iptables Proxier"
	I0919 16:59:56.541289       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 16:59:56.541479       1 server.go:846] "Version info" version="v1.28.2"
	I0919 16:59:56.541519       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 16:59:56.542262       1 config.go:188] "Starting service config controller"
	I0919 16:59:56.542316       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 16:59:56.542346       1 config.go:97] "Starting endpoint slice config controller"
	I0919 16:59:56.542361       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 16:59:56.543054       1 config.go:315] "Starting node config controller"
	I0919 16:59:56.543090       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 16:59:56.643510       1 shared_informer.go:318] Caches are synced for node config
	I0919 16:59:56.643555       1 shared_informer.go:318] Caches are synced for service config
	I0919 16:59:56.643646       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [31ddeae9d4c878be3425d7e5a32b00f99b0af6c24d7276f29f8c7c9e6010c895] <==
	* W0919 16:59:39.482217       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 16:59:39.482318       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 16:59:39.503323       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 16:59:39.503695       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 16:59:39.534889       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 16:59:39.535005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0919 16:59:39.595101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 16:59:39.595391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 16:59:39.631033       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 16:59:39.631122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 16:59:39.679028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 16:59:39.679081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0919 16:59:39.721297       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 16:59:39.721349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 16:59:39.753539       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 16:59:39.753671       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 16:59:39.757533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 16:59:39.757675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0919 16:59:39.824225       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 16:59:39.824314       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0919 16:59:39.832130       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 16:59:39.832216       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 16:59:39.867970       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 16:59:39.868032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0919 16:59:41.737178       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 16:59:09 UTC, ends at Tue 2023-09-19 17:00:56 UTC. --
	Sep 19 16:59:54 multinode-553715 kubelet[1261]: I0919 16:59:54.155295    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgnc9\" (UniqueName: \"kubernetes.io/projected/377d6478-cda2-47b9-8af8-cff3064e8524-kube-api-access-rgnc9\") pod \"kube-proxy-tvcz9\" (UID: \"377d6478-cda2-47b9-8af8-cff3064e8524\") " pod="kube-system/kube-proxy-tvcz9"
	Sep 19 16:59:54 multinode-553715 kubelet[1261]: I0919 16:59:54.155315    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2479ec2b-6cd3-4fb2-b85f-43b175cfbb79-cni-cfg\") pod \"kindnet-lmmc5\" (UID: \"2479ec2b-6cd3-4fb2-b85f-43b175cfbb79\") " pod="kube-system/kindnet-lmmc5"
	Sep 19 16:59:54 multinode-553715 kubelet[1261]: I0919 16:59:54.155336    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2479ec2b-6cd3-4fb2-b85f-43b175cfbb79-lib-modules\") pod \"kindnet-lmmc5\" (UID: \"2479ec2b-6cd3-4fb2-b85f-43b175cfbb79\") " pod="kube-system/kindnet-lmmc5"
	Sep 19 16:59:54 multinode-553715 kubelet[1261]: I0919 16:59:54.155354    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qvtk\" (UniqueName: \"kubernetes.io/projected/2479ec2b-6cd3-4fb2-b85f-43b175cfbb79-kube-api-access-5qvtk\") pod \"kindnet-lmmc5\" (UID: \"2479ec2b-6cd3-4fb2-b85f-43b175cfbb79\") " pod="kube-system/kindnet-lmmc5"
	Sep 19 16:59:54 multinode-553715 kubelet[1261]: I0919 16:59:54.155393    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/377d6478-cda2-47b9-8af8-cff3064e8524-xtables-lock\") pod \"kube-proxy-tvcz9\" (UID: \"377d6478-cda2-47b9-8af8-cff3064e8524\") " pod="kube-system/kube-proxy-tvcz9"
	Sep 19 16:59:54 multinode-553715 kubelet[1261]: I0919 16:59:54.155413    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2479ec2b-6cd3-4fb2-b85f-43b175cfbb79-xtables-lock\") pod \"kindnet-lmmc5\" (UID: \"2479ec2b-6cd3-4fb2-b85f-43b175cfbb79\") " pod="kube-system/kindnet-lmmc5"
	Sep 19 16:59:55 multinode-553715 kubelet[1261]: E0919 16:59:55.256794    1261 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Sep 19 16:59:55 multinode-553715 kubelet[1261]: E0919 16:59:55.257018    1261 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/377d6478-cda2-47b9-8af8-cff3064e8524-kube-proxy podName:377d6478-cda2-47b9-8af8-cff3064e8524 nodeName:}" failed. No retries permitted until 2023-09-19 16:59:55.756941403 +0000 UTC m=+14.157700259 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/377d6478-cda2-47b9-8af8-cff3064e8524-kube-proxy") pod "kube-proxy-tvcz9" (UID: "377d6478-cda2-47b9-8af8-cff3064e8524") : failed to sync configmap cache: timed out waiting for the condition
	Sep 19 16:59:58 multinode-553715 kubelet[1261]: I0919 16:59:58.948762    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tvcz9" podStartSLOduration=4.948724573 podCreationTimestamp="2023-09-19 16:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:59:56.942778006 +0000 UTC m=+15.343536878" watchObservedRunningTime="2023-09-19 16:59:58.948724573 +0000 UTC m=+17.349483445"
	Sep 19 16:59:59 multinode-553715 kubelet[1261]: I0919 16:59:59.896905    1261 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 19 16:59:59 multinode-553715 kubelet[1261]: I0919 16:59:59.936840    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lmmc5" podStartSLOduration=5.9367955850000005 podCreationTimestamp="2023-09-19 16:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:59:58.949578725 +0000 UTC m=+17.350337597" watchObservedRunningTime="2023-09-19 16:59:59.936795585 +0000 UTC m=+18.337554457"
	Sep 19 16:59:59 multinode-553715 kubelet[1261]: I0919 16:59:59.937045    1261 topology_manager.go:215] "Topology Admit Handler" podUID="fbc226fb-43a9-4e0f-ac99-614f2740485d" podNamespace="kube-system" podName="coredns-5dd5756b68-pffkm"
	Sep 19 16:59:59 multinode-553715 kubelet[1261]: I0919 16:59:59.947534    1261 topology_manager.go:215] "Topology Admit Handler" podUID="6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8" podNamespace="kube-system" podName="storage-provisioner"
	Sep 19 16:59:59 multinode-553715 kubelet[1261]: I0919 16:59:59.998082    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbc226fb-43a9-4e0f-ac99-614f2740485d-config-volume\") pod \"coredns-5dd5756b68-pffkm\" (UID: \"fbc226fb-43a9-4e0f-ac99-614f2740485d\") " pod="kube-system/coredns-5dd5756b68-pffkm"
	Sep 19 16:59:59 multinode-553715 kubelet[1261]: I0919 16:59:59.998153    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8-tmp\") pod \"storage-provisioner\" (UID: \"6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8\") " pod="kube-system/storage-provisioner"
	Sep 19 16:59:59 multinode-553715 kubelet[1261]: I0919 16:59:59.998179    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ttrd\" (UniqueName: \"kubernetes.io/projected/fbc226fb-43a9-4e0f-ac99-614f2740485d-kube-api-access-6ttrd\") pod \"coredns-5dd5756b68-pffkm\" (UID: \"fbc226fb-43a9-4e0f-ac99-614f2740485d\") " pod="kube-system/coredns-5dd5756b68-pffkm"
	Sep 19 16:59:59 multinode-553715 kubelet[1261]: I0919 16:59:59.998199    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65gww\" (UniqueName: \"kubernetes.io/projected/6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8-kube-api-access-65gww\") pod \"storage-provisioner\" (UID: \"6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8\") " pod="kube-system/storage-provisioner"
	Sep 19 17:00:01 multinode-553715 kubelet[1261]: I0919 17:00:01.986191    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pffkm" podStartSLOduration=7.985926188 podCreationTimestamp="2023-09-19 16:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 17:00:01.983224825 +0000 UTC m=+20.383983698" watchObservedRunningTime="2023-09-19 17:00:01.985926188 +0000 UTC m=+20.386685060"
	Sep 19 17:00:01 multinode-553715 kubelet[1261]: I0919 17:00:01.987069    1261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.986681641 podCreationTimestamp="2023-09-19 16:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 17:00:00.964031085 +0000 UTC m=+19.364789958" watchObservedRunningTime="2023-09-19 17:00:01.986681641 +0000 UTC m=+20.387440514"
	Sep 19 17:00:41 multinode-553715 kubelet[1261]: E0919 17:00:41.890221    1261 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 17:00:41 multinode-553715 kubelet[1261]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 17:00:41 multinode-553715 kubelet[1261]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 17:00:41 multinode-553715 kubelet[1261]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 17:00:48 multinode-553715 kubelet[1261]: I0919 17:00:48.716371    1261 topology_manager.go:215] "Topology Admit Handler" podUID="b92501a7-dae6-46bb-afb7-2ea5795f162d" podNamespace="default" podName="busybox-5bc68d56bd-xj8tc"
	Sep 19 17:00:48 multinode-553715 kubelet[1261]: I0919 17:00:48.822301    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v97x7\" (UniqueName: \"kubernetes.io/projected/b92501a7-dae6-46bb-afb7-2ea5795f162d-kube-api-access-v97x7\") pod \"busybox-5bc68d56bd-xj8tc\" (UID: \"b92501a7-dae6-46bb-afb7-2ea5795f162d\") " pod="default/busybox-5bc68d56bd-xj8tc"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-553715 -n multinode-553715
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-553715 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.13s)
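Note on the kubelet errors above: the repeated "Could not set up iptables canary" messages mean the guest kernel cannot initialize the ip6tables "nat" table (the ip6table_nat module is not loaded), which is most likely unrelated to the ping failure itself. A minimal sketch of how this could be confirmed from the host, assuming the multinode-553715 profile from this run is still up; the commands are illustrative and not part of the test suite:

	# check whether the ip6table_nat module is loaded inside the node
	out/minikube-linux-amd64 ssh -p multinode-553715 -- "lsmod | grep ip6table_nat || echo 'ip6table_nat not loaded'"
	# if the module is available in the guest kernel, loading it lets the nat table initialize
	out/minikube-linux-amd64 ssh -p multinode-553715 -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"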

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (688.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-553715
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-553715
E0919 17:02:56.283416   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:03:21.266890   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-553715: exit status 82 (2m1.608440218s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-553715"  ...
	* Stopping node "multinode-553715"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
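The stop above exited with status 82 (GUEST_STOP_TIMEOUT): after roughly two minutes the VM was still reported as "Running", so minikube gave up. A sketch of how this could be investigated when reproducing locally, assuming a KVM host with virsh installed and the same profile name; these are hypothetical troubleshooting steps, not part of the recorded test run:

	# inspect the libvirt domain state directly
	virsh --connect qemu:///system list --all | grep multinode-553715
	# retry the stop with verbose output, then collect the log file the error message asks for
	out/minikube-linux-amd64 stop -p multinode-553715 --alsologtostderr -v=5
	out/minikube-linux-amd64 logs -p multinode-553715 --file=logs.txt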
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-553715" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553715 --wait=true -v=8 --alsologtostderr
E0919 17:04:44.307778   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 17:06:14.062281   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 17:07:56.282340   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:08:21.263878   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 17:09:19.331017   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:11:14.061351   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 17:12:37.106865   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 17:12:56.282435   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:13:21.266194   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553715 --wait=true -v=8 --alsologtostderr: (9m23.649212351s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-553715
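The cert_rotation.go:168 messages interleaved above come from client-go certificate watchers that still point at client.crt files of profiles used earlier in this job (functional-225429, addons-897988, ingress-addon-legacy-845293) and apparently removed by the time this test ran; they look like background noise rather than part of this failure. An illustrative check, using the paths quoted in the messages themselves:

	# the watched client certificates are gone from disk, hence the "no such file or directory" errors
	ls -l /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt
	# list the profiles minikube still knows about
	out/minikube-linux-amd64 profile list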
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-553715 -n multinode-553715
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-553715 logs -n 25: (1.692869715s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-553715 cp multinode-553715-m02:/home/docker/cp-test.txt                       | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile242511980/001/cp-test_multinode-553715-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-553715 cp multinode-553715-m02:/home/docker/cp-test.txt                       | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715:/home/docker/cp-test_multinode-553715-m02_multinode-553715.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n multinode-553715 sudo cat                                       | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | /home/docker/cp-test_multinode-553715-m02_multinode-553715.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-553715 cp multinode-553715-m02:/home/docker/cp-test.txt                       | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m03:/home/docker/cp-test_multinode-553715-m02_multinode-553715-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n multinode-553715-m03 sudo cat                                   | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | /home/docker/cp-test_multinode-553715-m02_multinode-553715-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-553715 cp testdata/cp-test.txt                                                | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-553715 cp multinode-553715-m03:/home/docker/cp-test.txt                       | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile242511980/001/cp-test_multinode-553715-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-553715 cp multinode-553715-m03:/home/docker/cp-test.txt                       | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715:/home/docker/cp-test_multinode-553715-m03_multinode-553715.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n multinode-553715 sudo cat                                       | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | /home/docker/cp-test_multinode-553715-m03_multinode-553715.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-553715 cp multinode-553715-m03:/home/docker/cp-test.txt                       | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m02:/home/docker/cp-test_multinode-553715-m03_multinode-553715-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n multinode-553715-m02 sudo cat                                   | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | /home/docker/cp-test_multinode-553715-m03_multinode-553715-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-553715 node stop m03                                                          | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	| node    | multinode-553715 node start                                                             | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:02 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-553715                                                                | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:02 UTC |                     |
	| stop    | -p multinode-553715                                                                     | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:02 UTC |                     |
	| start   | -p multinode-553715                                                                     | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:04 UTC | 19 Sep 23 17:13 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-553715                                                                | multinode-553715 | jenkins | v1.31.2 | 19 Sep 23 17:13 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:04:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:04:24.155601   28964 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:04:24.155827   28964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:04:24.155836   28964 out.go:309] Setting ErrFile to fd 2...
	I0919 17:04:24.155841   28964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:04:24.156030   28964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:04:24.156581   28964 out.go:303] Setting JSON to false
	I0919 17:04:24.157421   28964 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2814,"bootTime":1695140250,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:04:24.157474   28964 start.go:138] virtualization: kvm guest
	I0919 17:04:24.159701   28964 out.go:177] * [multinode-553715] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:04:24.161166   28964 notify.go:220] Checking for updates...
	I0919 17:04:24.161172   28964 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:04:24.162568   28964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:04:24.163879   28964 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:04:24.165352   28964 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:04:24.166697   28964 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:04:24.167929   28964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:04:24.169646   28964 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:04:24.169745   28964 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:04:24.170205   28964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:04:24.170278   28964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:04:24.184549   28964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46605
	I0919 17:04:24.184871   28964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:04:24.185368   28964 main.go:141] libmachine: Using API Version  1
	I0919 17:04:24.185388   28964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:04:24.185709   28964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:04:24.185859   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:04:24.220206   28964 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:04:24.221610   28964 start.go:298] selected driver: kvm2
	I0919 17:04:24.221623   28964 start.go:902] validating driver "kvm2" against &{Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:04:24.221780   28964 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:04:24.222206   28964 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:04:24.222289   28964 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:04:24.236040   28964 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:04:24.236825   28964 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 17:04:24.236865   28964 cni.go:84] Creating CNI manager for ""
	I0919 17:04:24.236877   28964 cni.go:136] 3 nodes found, recommending kindnet
	I0919 17:04:24.236890   28964 start_flags.go:321] config:
	{Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoP
auseInterval:1m0s}
	I0919 17:04:24.237126   28964 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:04:24.239542   28964 out.go:177] * Starting control plane node multinode-553715 in cluster multinode-553715
	I0919 17:04:24.240993   28964 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 17:04:24.241025   28964 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 17:04:24.241032   28964 cache.go:57] Caching tarball of preloaded images
	I0919 17:04:24.241105   28964 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:04:24.241116   28964 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 17:04:24.241221   28964 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 17:04:24.241461   28964 start.go:365] acquiring machines lock for multinode-553715: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:04:24.241517   28964 start.go:369] acquired machines lock for "multinode-553715" in 30.294µs
	I0919 17:04:24.241533   28964 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:04:24.241544   28964 fix.go:54] fixHost starting: 
	I0919 17:04:24.241799   28964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:04:24.241829   28964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:04:24.254823   28964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0919 17:04:24.255172   28964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:04:24.255606   28964 main.go:141] libmachine: Using API Version  1
	I0919 17:04:24.255628   28964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:04:24.255937   28964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:04:24.256115   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:04:24.256246   28964 main.go:141] libmachine: (multinode-553715) Calling .GetState
	I0919 17:04:24.257680   28964 fix.go:102] recreateIfNeeded on multinode-553715: state=Running err=<nil>
	W0919 17:04:24.257711   28964 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:04:24.259572   28964 out.go:177] * Updating the running kvm2 "multinode-553715" VM ...
	I0919 17:04:24.260900   28964 machine.go:88] provisioning docker machine ...
	I0919 17:04:24.260923   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:04:24.261158   28964 main.go:141] libmachine: (multinode-553715) Calling .GetMachineName
	I0919 17:04:24.261378   28964 buildroot.go:166] provisioning hostname "multinode-553715"
	I0919 17:04:24.261398   28964 main.go:141] libmachine: (multinode-553715) Calling .GetMachineName
	I0919 17:04:24.261561   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:04:24.264066   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:04:24.264517   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:04:24.264539   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:04:24.264711   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:04:24.264878   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:04:24.265027   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:04:24.265167   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:04:24.265348   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:04:24.265776   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 17:04:24.265795   28964 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553715 && echo "multinode-553715" | sudo tee /etc/hostname
	I0919 17:04:42.676737   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:04:48.756707   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:04:51.828696   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:04:57.908720   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:00.980652   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:07.060692   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:10.132726   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:16.213023   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:19.284693   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:25.364641   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:28.436662   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:34.516665   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:37.588650   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:43.668657   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:46.740655   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:52.820670   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:05:55.892670   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:01.972728   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:05.044617   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:11.124641   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:14.196655   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:20.276631   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:23.348642   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:29.428659   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:32.500670   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:38.580735   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:41.652757   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:47.732723   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:50.804649   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:56.884703   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:06:59.960670   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:06.036669   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:09.108719   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:15.188646   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:18.260742   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:24.340683   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:27.412708   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:33.492691   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:36.564734   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:42.644712   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:45.716647   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:51.796664   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:07:54.868699   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:00.948758   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:04.020689   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:10.100686   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:13.172695   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:19.252639   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:22.324659   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:28.404668   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:31.476654   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:37.556721   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:40.628714   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:46.708665   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:49.780700   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:55.860689   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:08:58.932724   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:09:05.012675   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:09:08.084711   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:09:14.164652   28964 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.38:22: connect: no route to host
	I0919 17:09:17.165622   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:09:17.165662   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:17.167495   28964 machine.go:91] provisioned docker machine in 4m52.906573159s
	I0919 17:09:17.167536   28964 fix.go:56] fixHost completed within 4m52.925993915s
	I0919 17:09:17.167544   28964 start.go:83] releasing machines lock for "multinode-553715", held for 4m52.92601578s
	W0919 17:09:17.167562   28964 start.go:688] error starting host: provision: host is not running
	W0919 17:09:17.167658   28964 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0919 17:09:17.167669   28964 start.go:703] Will try again in 5 seconds ...
	I0919 17:09:22.170487   28964 start.go:365] acquiring machines lock for multinode-553715: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:09:22.170590   28964 start.go:369] acquired machines lock for "multinode-553715" in 57.641µs
	I0919 17:09:22.170611   28964 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:09:22.170621   28964 fix.go:54] fixHost starting: 
	I0919 17:09:22.170898   28964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:09:22.170933   28964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:09:22.185365   28964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
	I0919 17:09:22.185820   28964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:09:22.186246   28964 main.go:141] libmachine: Using API Version  1
	I0919 17:09:22.186272   28964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:09:22.186593   28964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:09:22.186781   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:09:22.186950   28964 main.go:141] libmachine: (multinode-553715) Calling .GetState
	I0919 17:09:22.188831   28964 fix.go:102] recreateIfNeeded on multinode-553715: state=Stopped err=<nil>
	I0919 17:09:22.188860   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	W0919 17:09:22.189101   28964 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:09:22.192244   28964 out.go:177] * Restarting existing kvm2 VM for "multinode-553715" ...
	I0919 17:09:22.193667   28964 main.go:141] libmachine: (multinode-553715) Calling .Start
	I0919 17:09:22.193857   28964 main.go:141] libmachine: (multinode-553715) Ensuring networks are active...
	I0919 17:09:22.194692   28964 main.go:141] libmachine: (multinode-553715) Ensuring network default is active
	I0919 17:09:22.195087   28964 main.go:141] libmachine: (multinode-553715) Ensuring network mk-multinode-553715 is active
	I0919 17:09:22.195501   28964 main.go:141] libmachine: (multinode-553715) Getting domain xml...
	I0919 17:09:22.196203   28964 main.go:141] libmachine: (multinode-553715) Creating domain...
	I0919 17:09:23.445943   28964 main.go:141] libmachine: (multinode-553715) Waiting to get IP...
	I0919 17:09:23.446836   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:23.447322   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:23.447350   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:23.447274   29747 retry.go:31] will retry after 205.880606ms: waiting for machine to come up
	I0919 17:09:23.654795   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:23.655360   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:23.655394   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:23.655287   29747 retry.go:31] will retry after 297.64956ms: waiting for machine to come up
	I0919 17:09:23.954654   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:23.955183   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:23.955216   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:23.955131   29747 retry.go:31] will retry after 394.563186ms: waiting for machine to come up
	I0919 17:09:24.351648   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:24.352184   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:24.352217   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:24.352107   29747 retry.go:31] will retry after 607.05016ms: waiting for machine to come up
	I0919 17:09:24.960704   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:24.961155   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:24.961186   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:24.961112   29747 retry.go:31] will retry after 596.161049ms: waiting for machine to come up
	I0919 17:09:25.558669   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:25.559145   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:25.559192   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:25.559118   29747 retry.go:31] will retry after 629.732621ms: waiting for machine to come up
	I0919 17:09:26.190934   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:26.191436   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:26.191465   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:26.191381   29747 retry.go:31] will retry after 1.107100229s: waiting for machine to come up
	I0919 17:09:27.300254   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:27.300743   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:27.300787   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:27.300709   29747 retry.go:31] will retry after 1.000724521s: waiting for machine to come up
	I0919 17:09:28.302874   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:28.303365   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:28.303394   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:28.303331   29747 retry.go:31] will retry after 1.559071262s: waiting for machine to come up
	I0919 17:09:29.864941   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:29.865312   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:29.865341   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:29.865258   29747 retry.go:31] will retry after 1.598421381s: waiting for machine to come up
	I0919 17:09:31.466243   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:31.466644   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:31.466678   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:31.466605   29747 retry.go:31] will retry after 2.69947821s: waiting for machine to come up
	I0919 17:09:34.169049   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:34.169523   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:34.169550   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:34.169479   29747 retry.go:31] will retry after 2.656732213s: waiting for machine to come up
	I0919 17:09:36.828576   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:36.829025   28964 main.go:141] libmachine: (multinode-553715) DBG | unable to find current IP address of domain multinode-553715 in network mk-multinode-553715
	I0919 17:09:36.829045   28964 main.go:141] libmachine: (multinode-553715) DBG | I0919 17:09:36.828969   29747 retry.go:31] will retry after 3.947734904s: waiting for machine to come up
	I0919 17:09:40.777826   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:40.778258   28964 main.go:141] libmachine: (multinode-553715) Found IP for machine: 192.168.39.38
	I0919 17:09:40.778295   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has current primary IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:40.778306   28964 main.go:141] libmachine: (multinode-553715) Reserving static IP address...
	I0919 17:09:40.778680   28964 main.go:141] libmachine: (multinode-553715) Reserved static IP address: 192.168.39.38
	I0919 17:09:40.778714   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "multinode-553715", mac: "52:54:00:01:c6:86", ip: "192.168.39.38"} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:40.778737   28964 main.go:141] libmachine: (multinode-553715) Waiting for SSH to be available...
	I0919 17:09:40.778774   28964 main.go:141] libmachine: (multinode-553715) DBG | skip adding static IP to network mk-multinode-553715 - found existing host DHCP lease matching {name: "multinode-553715", mac: "52:54:00:01:c6:86", ip: "192.168.39.38"}
	I0919 17:09:40.778801   28964 main.go:141] libmachine: (multinode-553715) DBG | Getting to WaitForSSH function...
	I0919 17:09:40.780838   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:40.781169   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:40.781215   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:40.781314   28964 main.go:141] libmachine: (multinode-553715) DBG | Using SSH client type: external
	I0919 17:09:40.781341   28964 main.go:141] libmachine: (multinode-553715) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa (-rw-------)
	I0919 17:09:40.781383   28964 main.go:141] libmachine: (multinode-553715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:09:40.781403   28964 main.go:141] libmachine: (multinode-553715) DBG | About to run SSH command:
	I0919 17:09:40.781413   28964 main.go:141] libmachine: (multinode-553715) DBG | exit 0
	I0919 17:09:40.876619   28964 main.go:141] libmachine: (multinode-553715) DBG | SSH cmd err, output: <nil>: 
	I0919 17:09:40.876999   28964 main.go:141] libmachine: (multinode-553715) Calling .GetConfigRaw
	I0919 17:09:40.877572   28964 main.go:141] libmachine: (multinode-553715) Calling .GetIP
	I0919 17:09:40.880030   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:40.880440   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:40.880470   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:40.880761   28964 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 17:09:40.880944   28964 machine.go:88] provisioning docker machine ...
	I0919 17:09:40.880963   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:09:40.881183   28964 main.go:141] libmachine: (multinode-553715) Calling .GetMachineName
	I0919 17:09:40.881311   28964 buildroot.go:166] provisioning hostname "multinode-553715"
	I0919 17:09:40.881328   28964 main.go:141] libmachine: (multinode-553715) Calling .GetMachineName
	I0919 17:09:40.881476   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:40.883445   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:40.883818   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:40.883846   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:40.883965   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:09:40.884162   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:40.884303   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:40.884435   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:09:40.884562   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:09:40.884863   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 17:09:40.884875   28964 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553715 && echo "multinode-553715" | sudo tee /etc/hostname
	I0919 17:09:41.029880   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553715
	
	I0919 17:09:41.029909   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:41.032609   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.032944   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:41.032983   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.033128   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:09:41.033299   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:41.033443   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:41.033558   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:09:41.033702   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:09:41.034004   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 17:09:41.034021   28964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553715/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:09:41.177077   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:09:41.177174   28964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:09:41.177209   28964 buildroot.go:174] setting up certificates
	I0919 17:09:41.177229   28964 provision.go:83] configureAuth start
	I0919 17:09:41.177244   28964 main.go:141] libmachine: (multinode-553715) Calling .GetMachineName
	I0919 17:09:41.177485   28964 main.go:141] libmachine: (multinode-553715) Calling .GetIP
	I0919 17:09:41.180123   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.180453   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:41.180484   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.180612   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:41.182572   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.182905   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:41.182933   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.183062   28964 provision.go:138] copyHostCerts
	I0919 17:09:41.183102   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:09:41.183150   28964 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:09:41.183163   28964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:09:41.183243   28964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:09:41.183318   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:09:41.183335   28964 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:09:41.183342   28964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:09:41.183366   28964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:09:41.183407   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:09:41.183421   28964 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:09:41.183427   28964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:09:41.183446   28964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:09:41.183491   28964 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.multinode-553715 san=[192.168.39.38 192.168.39.38 localhost 127.0.0.1 minikube multinode-553715]
	I0919 17:09:41.281500   28964 provision.go:172] copyRemoteCerts
	I0919 17:09:41.281551   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:09:41.281596   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:41.284346   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.284769   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:41.284810   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.284947   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:09:41.285166   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:41.285327   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:09:41.285473   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:09:41.382556   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 17:09:41.382638   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:09:41.410302   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 17:09:41.410371   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0919 17:09:41.434691   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 17:09:41.434759   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 17:09:41.457660   28964 provision.go:86] duration metric: configureAuth took 280.415019ms
	I0919 17:09:41.457689   28964 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:09:41.457949   28964 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:09:41.458019   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:41.461156   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.461556   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:41.461602   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.461754   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:09:41.461950   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:41.462115   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:41.462271   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:09:41.462437   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:09:41.462779   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 17:09:41.462795   28964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:09:41.777352   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:09:41.777376   28964 machine.go:91] provisioned docker machine in 896.419179ms
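	(The "%!s(MISSING)" tokens in the command above are Go format-verb artifacts in minikube's logging, not part of the command that actually ran. Judging from the command text and the echoed output, the step amounts to roughly the following sketch, with the exact quoting and surrounding newlines assumed:)

	    sudo mkdir -p /etc/sysconfig
	    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	      | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio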
	I0919 17:09:41.777388   28964 start.go:300] post-start starting for "multinode-553715" (driver="kvm2")
	I0919 17:09:41.777405   28964 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:09:41.777430   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:09:41.777755   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:09:41.777783   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:41.780196   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.780674   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:41.780702   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.780869   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:09:41.781035   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:41.781163   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:09:41.781328   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:09:41.878034   28964 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:09:41.882132   28964 command_runner.go:130] > NAME=Buildroot
	I0919 17:09:41.882154   28964 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I0919 17:09:41.882160   28964 command_runner.go:130] > ID=buildroot
	I0919 17:09:41.882168   28964 command_runner.go:130] > VERSION_ID=2021.02.12
	I0919 17:09:41.882176   28964 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0919 17:09:41.882207   28964 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:09:41.882225   28964 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:09:41.882292   28964 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:09:41.882363   28964 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:09:41.882372   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /etc/ssl/certs/132392.pem
	I0919 17:09:41.882463   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:09:41.890646   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:09:41.913224   28964 start.go:303] post-start completed in 135.802634ms
	I0919 17:09:41.913248   28964 fix.go:56] fixHost completed within 19.742626246s
	I0919 17:09:41.913271   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:41.915681   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.916044   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:41.916075   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:41.916253   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:09:41.916447   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:41.916586   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:41.916714   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:09:41.916849   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:09:41.917142   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.38 22 <nil> <nil>}
	I0919 17:09:41.917154   28964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:09:42.049019   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695143381.996782829
	
	I0919 17:09:42.049037   28964 fix.go:206] guest clock: 1695143381.996782829
	I0919 17:09:42.049043   28964 fix.go:219] Guest: 2023-09-19 17:09:41.996782829 +0000 UTC Remote: 2023-09-19 17:09:41.913252353 +0000 UTC m=+317.788320830 (delta=83.530476ms)
	I0919 17:09:42.049059   28964 fix.go:190] guest clock delta is within tolerance: 83.530476ms
	I0919 17:09:42.049069   28964 start.go:83] releasing machines lock for "multinode-553715", held for 19.878465065s
	I0919 17:09:42.049085   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:09:42.049355   28964 main.go:141] libmachine: (multinode-553715) Calling .GetIP
	I0919 17:09:42.051702   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:42.051957   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:42.052007   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:42.052109   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:09:42.052592   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:09:42.052808   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:09:42.052909   28964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:09:42.052951   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:42.053054   28964 ssh_runner.go:195] Run: cat /version.json
	I0919 17:09:42.053094   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:09:42.055789   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:42.055954   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:42.056196   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:42.056223   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:42.056377   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:09:42.056554   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:42.056579   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:42.056580   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:42.056694   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:09:42.056775   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:09:42.056835   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:09:42.056896   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:09:42.056953   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:09:42.057149   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:09:42.169524   28964 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0919 17:09:42.169586   28964 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I0919 17:09:42.169702   28964 ssh_runner.go:195] Run: systemctl --version
	I0919 17:09:42.175212   28964 command_runner.go:130] > systemd 247 (247)
	I0919 17:09:42.175258   28964 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0919 17:09:42.175315   28964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:09:42.315964   28964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 17:09:42.322375   28964 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0919 17:09:42.322522   28964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:09:42.322594   28964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:09:42.336797   28964 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0919 17:09:42.336824   28964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:09:42.336832   28964 start.go:469] detecting cgroup driver to use...
	I0919 17:09:42.336904   28964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:09:42.354720   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:09:42.367530   28964 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:09:42.367580   28964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:09:42.380474   28964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:09:42.393789   28964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:09:42.502647   28964 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0919 17:09:42.502735   28964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:09:42.623366   28964 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0919 17:09:42.623454   28964 docker.go:212] disabling docker service ...
	I0919 17:09:42.623516   28964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:09:42.637053   28964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:09:42.648352   28964 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0919 17:09:42.648534   28964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:09:42.766276   28964 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0919 17:09:42.766359   28964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:09:42.779321   28964 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0919 17:09:42.779692   28964 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0919 17:09:42.879977   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:09:42.891449   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:09:42.908533   28964 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
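	(The mangled printf two lines up is the same logging artifact; per the echoed file content just above, the step effectively writes a one-line crictl config pointing at the CRI-O socket. A minimal equivalent, assuming standard crictl behaviour:)

	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' \
	      | sudo tee /etc/crictl.yaml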
	I0919 17:09:42.908993   28964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0919 17:09:42.909046   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:09:42.918183   28964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:09:42.918253   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:09:42.927275   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:09:42.936000   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
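	(After the sed edits above, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should read as sketched below; this is inferred from the commands, not dumped verbatim by the test, so treat the exact layout as an assumption:)

	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected:
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"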
	I0919 17:09:42.944758   28964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:09:42.953892   28964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:09:42.961816   28964 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:09:42.961864   28964 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:09:42.961915   28964 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 17:09:42.974621   28964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
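	(The sysctl failure above is expected until the br_netfilter module is loaded; the provisioner then loads it and enables IPv4 forwarding. Reproducing the same checks by hand would look roughly like this sketch:)

	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables    # readable once the module is loaded
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"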
	I0919 17:09:42.982901   28964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:09:43.088647   28964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:09:43.257475   28964 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:09:43.257557   28964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:09:43.262156   28964 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0919 17:09:43.262179   28964 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0919 17:09:43.262187   28964 command_runner.go:130] > Device: 16h/22d	Inode: 756         Links: 1
	I0919 17:09:43.262199   28964 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 17:09:43.262208   28964 command_runner.go:130] > Access: 2023-09-19 17:09:43.190363855 +0000
	I0919 17:09:43.262222   28964 command_runner.go:130] > Modify: 2023-09-19 17:09:43.190363855 +0000
	I0919 17:09:43.262234   28964 command_runner.go:130] > Change: 2023-09-19 17:09:43.190363855 +0000
	I0919 17:09:43.262245   28964 command_runner.go:130] >  Birth: -
	I0919 17:09:43.262266   28964 start.go:537] Will wait 60s for crictl version
	I0919 17:09:43.262310   28964 ssh_runner.go:195] Run: which crictl
	I0919 17:09:43.265678   28964 command_runner.go:130] > /usr/bin/crictl
	I0919 17:09:43.265849   28964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:09:43.312398   28964 command_runner.go:130] > Version:  0.1.0
	I0919 17:09:43.312451   28964 command_runner.go:130] > RuntimeName:  cri-o
	I0919 17:09:43.312458   28964 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0919 17:09:43.312467   28964 command_runner.go:130] > RuntimeApiVersion:  v1
	I0919 17:09:43.312487   28964 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:09:43.312551   28964 ssh_runner.go:195] Run: crio --version
	I0919 17:09:43.361948   28964 command_runner.go:130] > crio version 1.24.1
	I0919 17:09:43.361966   28964 command_runner.go:130] > Version:          1.24.1
	I0919 17:09:43.361973   28964 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 17:09:43.361977   28964 command_runner.go:130] > GitTreeState:     dirty
	I0919 17:09:43.361983   28964 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 17:09:43.361987   28964 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 17:09:43.361991   28964 command_runner.go:130] > Compiler:         gc
	I0919 17:09:43.361996   28964 command_runner.go:130] > Platform:         linux/amd64
	I0919 17:09:43.362007   28964 command_runner.go:130] > Linkmode:         dynamic
	I0919 17:09:43.362016   28964 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 17:09:43.362027   28964 command_runner.go:130] > SeccompEnabled:   true
	I0919 17:09:43.362038   28964 command_runner.go:130] > AppArmorEnabled:  false
	I0919 17:09:43.363262   28964 ssh_runner.go:195] Run: crio --version
	I0919 17:09:43.403457   28964 command_runner.go:130] > crio version 1.24.1
	I0919 17:09:43.403474   28964 command_runner.go:130] > Version:          1.24.1
	I0919 17:09:43.403481   28964 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 17:09:43.403485   28964 command_runner.go:130] > GitTreeState:     dirty
	I0919 17:09:43.403501   28964 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 17:09:43.403506   28964 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 17:09:43.403510   28964 command_runner.go:130] > Compiler:         gc
	I0919 17:09:43.403514   28964 command_runner.go:130] > Platform:         linux/amd64
	I0919 17:09:43.403519   28964 command_runner.go:130] > Linkmode:         dynamic
	I0919 17:09:43.403526   28964 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 17:09:43.403530   28964 command_runner.go:130] > SeccompEnabled:   true
	I0919 17:09:43.403534   28964 command_runner.go:130] > AppArmorEnabled:  false
	I0919 17:09:43.407108   28964 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I0919 17:09:43.408588   28964 main.go:141] libmachine: (multinode-553715) Calling .GetIP
	I0919 17:09:43.411323   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:43.411564   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:09:43.411593   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:09:43.411719   28964 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 17:09:43.415680   28964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:09:43.427703   28964 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 17:09:43.427777   28964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:09:43.464149   28964 command_runner.go:130] > {
	I0919 17:09:43.464166   28964 command_runner.go:130] >   "images": [
	I0919 17:09:43.464170   28964 command_runner.go:130] >     {
	I0919 17:09:43.464178   28964 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0919 17:09:43.464182   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:43.464188   28964 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0919 17:09:43.464192   28964 command_runner.go:130] >       ],
	I0919 17:09:43.464196   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:43.464208   28964 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0919 17:09:43.464222   28964 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0919 17:09:43.464232   28964 command_runner.go:130] >       ],
	I0919 17:09:43.464240   28964 command_runner.go:130] >       "size": "750414",
	I0919 17:09:43.464250   28964 command_runner.go:130] >       "uid": {
	I0919 17:09:43.464257   28964 command_runner.go:130] >         "value": "65535"
	I0919 17:09:43.464271   28964 command_runner.go:130] >       },
	I0919 17:09:43.464281   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:43.464293   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:43.464305   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:43.464314   28964 command_runner.go:130] >     }
	I0919 17:09:43.464322   28964 command_runner.go:130] >   ]
	I0919 17:09:43.464329   28964 command_runner.go:130] > }
	I0919 17:09:43.464471   28964 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I0919 17:09:43.464528   28964 ssh_runner.go:195] Run: which lz4
	I0919 17:09:43.468068   28964 command_runner.go:130] > /usr/bin/lz4
	I0919 17:09:43.468169   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0919 17:09:43.468260   28964 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0919 17:09:43.471951   28964 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:09:43.472164   28964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:09:43.472191   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I0919 17:09:45.162262   28964 crio.go:444] Took 1.694031 seconds to copy over tarball
	I0919 17:09:45.162326   28964 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 17:09:47.967796   28964 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.805448363s)
	I0919 17:09:47.967823   28964 crio.go:451] Took 2.805535 seconds to extract the tarball
	I0919 17:09:47.967832   28964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 17:09:48.008258   28964 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:09:48.055927   28964 command_runner.go:130] > {
	I0919 17:09:48.055945   28964 command_runner.go:130] >   "images": [
	I0919 17:09:48.055949   28964 command_runner.go:130] >     {
	I0919 17:09:48.055956   28964 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0919 17:09:48.055961   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:48.055966   28964 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0919 17:09:48.055970   28964 command_runner.go:130] >       ],
	I0919 17:09:48.055974   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:48.055990   28964 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0919 17:09:48.056002   28964 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0919 17:09:48.056007   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056015   28964 command_runner.go:130] >       "size": "65258016",
	I0919 17:09:48.056021   28964 command_runner.go:130] >       "uid": null,
	I0919 17:09:48.056027   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:48.056044   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:48.056053   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:48.056057   28964 command_runner.go:130] >     },
	I0919 17:09:48.056069   28964 command_runner.go:130] >     {
	I0919 17:09:48.056078   28964 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0919 17:09:48.056082   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:48.056091   28964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0919 17:09:48.056097   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056108   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:48.056126   28964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0919 17:09:48.056138   28964 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0919 17:09:48.056142   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056151   28964 command_runner.go:130] >       "size": "31470524",
	I0919 17:09:48.056158   28964 command_runner.go:130] >       "uid": null,
	I0919 17:09:48.056162   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:48.056166   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:48.056173   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:48.056177   28964 command_runner.go:130] >     },
	I0919 17:09:48.056180   28964 command_runner.go:130] >     {
	I0919 17:09:48.056188   28964 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0919 17:09:48.056195   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:48.056205   28964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0919 17:09:48.056212   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056217   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:48.056224   28964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0919 17:09:48.056234   28964 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0919 17:09:48.056239   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056245   28964 command_runner.go:130] >       "size": "53621675",
	I0919 17:09:48.056249   28964 command_runner.go:130] >       "uid": null,
	I0919 17:09:48.056253   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:48.056261   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:48.056265   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:48.056271   28964 command_runner.go:130] >     },
	I0919 17:09:48.056275   28964 command_runner.go:130] >     {
	I0919 17:09:48.056281   28964 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0919 17:09:48.056288   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:48.056293   28964 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0919 17:09:48.056299   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056303   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:48.056315   28964 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0919 17:09:48.056324   28964 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0919 17:09:48.056336   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056343   28964 command_runner.go:130] >       "size": "295456551",
	I0919 17:09:48.056347   28964 command_runner.go:130] >       "uid": {
	I0919 17:09:48.056352   28964 command_runner.go:130] >         "value": "0"
	I0919 17:09:48.056356   28964 command_runner.go:130] >       },
	I0919 17:09:48.056363   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:48.056367   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:48.056372   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:48.056378   28964 command_runner.go:130] >     },
	I0919 17:09:48.056381   28964 command_runner.go:130] >     {
	I0919 17:09:48.056390   28964 command_runner.go:130] >       "id": "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce",
	I0919 17:09:48.056394   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:48.056399   28964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I0919 17:09:48.056415   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056419   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:48.056427   28964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631",
	I0919 17:09:48.056439   28964 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I0919 17:09:48.056449   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056457   28964 command_runner.go:130] >       "size": "127149008",
	I0919 17:09:48.056461   28964 command_runner.go:130] >       "uid": {
	I0919 17:09:48.056466   28964 command_runner.go:130] >         "value": "0"
	I0919 17:09:48.056470   28964 command_runner.go:130] >       },
	I0919 17:09:48.056476   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:48.056480   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:48.056487   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:48.056491   28964 command_runner.go:130] >     },
	I0919 17:09:48.056497   28964 command_runner.go:130] >     {
	I0919 17:09:48.056503   28964 command_runner.go:130] >       "id": "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57",
	I0919 17:09:48.056508   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:48.056513   28964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I0919 17:09:48.056519   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056524   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:48.056534   28964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4",
	I0919 17:09:48.056541   28964 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"
	I0919 17:09:48.056550   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056555   28964 command_runner.go:130] >       "size": "123171638",
	I0919 17:09:48.056561   28964 command_runner.go:130] >       "uid": {
	I0919 17:09:48.056565   28964 command_runner.go:130] >         "value": "0"
	I0919 17:09:48.056571   28964 command_runner.go:130] >       },
	I0919 17:09:48.056575   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:48.056580   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:48.056584   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:48.056589   28964 command_runner.go:130] >     },
	I0919 17:09:48.056593   28964 command_runner.go:130] >     {
	I0919 17:09:48.056601   28964 command_runner.go:130] >       "id": "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",
	I0919 17:09:48.056605   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:48.056611   28964 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I0919 17:09:48.056614   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056619   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:48.056626   28964 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded",
	I0919 17:09:48.056635   28964 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"
	I0919 17:09:48.056641   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056647   28964 command_runner.go:130] >       "size": "74687895",
	I0919 17:09:48.056660   28964 command_runner.go:130] >       "uid": null,
	I0919 17:09:48.056664   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:48.056668   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:48.056673   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:48.056678   28964 command_runner.go:130] >     },
	I0919 17:09:48.056682   28964 command_runner.go:130] >     {
	I0919 17:09:48.056691   28964 command_runner.go:130] >       "id": "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8",
	I0919 17:09:48.056695   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:48.056703   28964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I0919 17:09:48.056707   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056713   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:48.056734   28964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I0919 17:09:48.056746   28964 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"
	I0919 17:09:48.056750   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056754   28964 command_runner.go:130] >       "size": "61485878",
	I0919 17:09:48.056757   28964 command_runner.go:130] >       "uid": {
	I0919 17:09:48.056761   28964 command_runner.go:130] >         "value": "0"
	I0919 17:09:48.056770   28964 command_runner.go:130] >       },
	I0919 17:09:48.056775   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:48.056781   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:48.056786   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:48.056793   28964 command_runner.go:130] >     },
	I0919 17:09:48.056796   28964 command_runner.go:130] >     {
	I0919 17:09:48.056802   28964 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0919 17:09:48.056811   28964 command_runner.go:130] >       "repoTags": [
	I0919 17:09:48.056816   28964 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0919 17:09:48.056822   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056826   28964 command_runner.go:130] >       "repoDigests": [
	I0919 17:09:48.056835   28964 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0919 17:09:48.056845   28964 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0919 17:09:48.056848   28964 command_runner.go:130] >       ],
	I0919 17:09:48.056853   28964 command_runner.go:130] >       "size": "750414",
	I0919 17:09:48.056858   28964 command_runner.go:130] >       "uid": {
	I0919 17:09:48.056862   28964 command_runner.go:130] >         "value": "65535"
	I0919 17:09:48.056868   28964 command_runner.go:130] >       },
	I0919 17:09:48.056874   28964 command_runner.go:130] >       "username": "",
	I0919 17:09:48.056883   28964 command_runner.go:130] >       "spec": null,
	I0919 17:09:48.056887   28964 command_runner.go:130] >       "pinned": false
	I0919 17:09:48.056891   28964 command_runner.go:130] >     }
	I0919 17:09:48.056895   28964 command_runner.go:130] >   ]
	I0919 17:09:48.056898   28964 command_runner.go:130] > }
	I0919 17:09:48.056995   28964 crio.go:496] all images are preloaded for cri-o runtime.
	I0919 17:09:48.057004   28964 cache_images.go:84] Images are preloaded, skipping loading
	I0919 17:09:48.057059   28964 ssh_runner.go:195] Run: crio config
	I0919 17:09:48.102065   28964 command_runner.go:130] ! time="2023-09-19 17:09:48.049416601Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0919 17:09:48.102111   28964 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0919 17:09:48.110678   28964 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0919 17:09:48.110700   28964 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0919 17:09:48.110710   28964 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0919 17:09:48.110716   28964 command_runner.go:130] > #
	I0919 17:09:48.110732   28964 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0919 17:09:48.110742   28964 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0919 17:09:48.110753   28964 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0919 17:09:48.110770   28964 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0919 17:09:48.110777   28964 command_runner.go:130] > # reload'.
	I0919 17:09:48.110792   28964 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0919 17:09:48.110807   28964 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0919 17:09:48.110821   28964 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0919 17:09:48.110834   28964 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0919 17:09:48.110844   28964 command_runner.go:130] > [crio]
	I0919 17:09:48.110857   28964 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0919 17:09:48.110870   28964 command_runner.go:130] > # containers images, in this directory.
	I0919 17:09:48.110882   28964 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0919 17:09:48.110901   28964 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0919 17:09:48.110914   28964 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0919 17:09:48.110928   28964 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0919 17:09:48.110940   28964 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0919 17:09:48.110948   28964 command_runner.go:130] > storage_driver = "overlay"
	I0919 17:09:48.110967   28964 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0919 17:09:48.110977   28964 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0919 17:09:48.110982   28964 command_runner.go:130] > storage_option = [
	I0919 17:09:48.110986   28964 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0919 17:09:48.110990   28964 command_runner.go:130] > ]
	I0919 17:09:48.110996   28964 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0919 17:09:48.111002   28964 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0919 17:09:48.111010   28964 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0919 17:09:48.111019   28964 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0919 17:09:48.111031   28964 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0919 17:09:48.111041   28964 command_runner.go:130] > # always happen on a node reboot
	I0919 17:09:48.111048   28964 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0919 17:09:48.111060   28964 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0919 17:09:48.111073   28964 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0919 17:09:48.111096   28964 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0919 17:09:48.111110   28964 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0919 17:09:48.111117   28964 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0919 17:09:48.111127   28964 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0919 17:09:48.111136   28964 command_runner.go:130] > # internal_wipe = true
	I0919 17:09:48.111141   28964 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0919 17:09:48.111150   28964 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0919 17:09:48.111156   28964 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0919 17:09:48.111161   28964 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0919 17:09:48.111169   28964 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0919 17:09:48.111173   28964 command_runner.go:130] > [crio.api]
	I0919 17:09:48.111181   28964 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0919 17:09:48.111186   28964 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0919 17:09:48.111193   28964 command_runner.go:130] > # IP address on which the stream server will listen.
	I0919 17:09:48.111198   28964 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0919 17:09:48.111207   28964 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0919 17:09:48.111215   28964 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0919 17:09:48.111219   28964 command_runner.go:130] > # stream_port = "0"
	I0919 17:09:48.111227   28964 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0919 17:09:48.111231   28964 command_runner.go:130] > # stream_enable_tls = false
	I0919 17:09:48.111238   28964 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0919 17:09:48.111244   28964 command_runner.go:130] > # stream_idle_timeout = ""
	I0919 17:09:48.111253   28964 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0919 17:09:48.111262   28964 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0919 17:09:48.111266   28964 command_runner.go:130] > # minutes.
	I0919 17:09:48.111270   28964 command_runner.go:130] > # stream_tls_cert = ""
	I0919 17:09:48.111278   28964 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0919 17:09:48.111284   28964 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0919 17:09:48.111290   28964 command_runner.go:130] > # stream_tls_key = ""
	I0919 17:09:48.111296   28964 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0919 17:09:48.111305   28964 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0919 17:09:48.111313   28964 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0919 17:09:48.111317   28964 command_runner.go:130] > # stream_tls_ca = ""
	I0919 17:09:48.111327   28964 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 17:09:48.111331   28964 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0919 17:09:48.111338   28964 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 17:09:48.111345   28964 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
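	These [crio.api] settings describe the AF_LOCAL socket the kubelet talks to; a hedged check that the daemon answers on it (assuming crictl is installed on the node) looks like:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info | head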
	I0919 17:09:48.111368   28964 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0919 17:09:48.111376   28964 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0919 17:09:48.111380   28964 command_runner.go:130] > [crio.runtime]
	I0919 17:09:48.111393   28964 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0919 17:09:48.111403   28964 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0919 17:09:48.111410   28964 command_runner.go:130] > # "nofile=1024:2048"
	I0919 17:09:48.111417   28964 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0919 17:09:48.111424   28964 command_runner.go:130] > # default_ulimits = [
	I0919 17:09:48.111427   28964 command_runner.go:130] > # ]
	I0919 17:09:48.111433   28964 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0919 17:09:48.111439   28964 command_runner.go:130] > # no_pivot = false
	I0919 17:09:48.111445   28964 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0919 17:09:48.111453   28964 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0919 17:09:48.111458   28964 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0919 17:09:48.111466   28964 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0919 17:09:48.111471   28964 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0919 17:09:48.111478   28964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 17:09:48.111485   28964 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0919 17:09:48.111489   28964 command_runner.go:130] > # Cgroup setting for conmon
	I0919 17:09:48.111499   28964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0919 17:09:48.111506   28964 command_runner.go:130] > conmon_cgroup = "pod"
	I0919 17:09:48.111514   28964 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0919 17:09:48.111522   28964 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0919 17:09:48.111528   28964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 17:09:48.111533   28964 command_runner.go:130] > conmon_env = [
	I0919 17:09:48.111539   28964 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0919 17:09:48.111543   28964 command_runner.go:130] > ]
	I0919 17:09:48.111548   28964 command_runner.go:130] > # Additional environment variables to set for all the
	I0919 17:09:48.111555   28964 command_runner.go:130] > # containers. These are overridden if set in the
	I0919 17:09:48.111561   28964 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0919 17:09:48.111572   28964 command_runner.go:130] > # default_env = [
	I0919 17:09:48.111579   28964 command_runner.go:130] > # ]
	I0919 17:09:48.111584   28964 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0919 17:09:48.111591   28964 command_runner.go:130] > # selinux = false
	I0919 17:09:48.111597   28964 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0919 17:09:48.111605   28964 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0919 17:09:48.111611   28964 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0919 17:09:48.111617   28964 command_runner.go:130] > # seccomp_profile = ""
	I0919 17:09:48.111622   28964 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0919 17:09:48.111630   28964 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0919 17:09:48.111638   28964 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0919 17:09:48.111643   28964 command_runner.go:130] > # which might increase security.
	I0919 17:09:48.111648   28964 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0919 17:09:48.111655   28964 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0919 17:09:48.111663   28964 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0919 17:09:48.111669   28964 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0919 17:09:48.111677   28964 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0919 17:09:48.111683   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:09:48.111690   28964 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0919 17:09:48.111696   28964 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0919 17:09:48.111703   28964 command_runner.go:130] > # the cgroup blockio controller.
	I0919 17:09:48.111707   28964 command_runner.go:130] > # blockio_config_file = ""
	I0919 17:09:48.111719   28964 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0919 17:09:48.111723   28964 command_runner.go:130] > # irqbalance daemon.
	I0919 17:09:48.111731   28964 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0919 17:09:48.111737   28964 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0919 17:09:48.111744   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:09:48.111751   28964 command_runner.go:130] > # rdt_config_file = ""
	I0919 17:09:48.111759   28964 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0919 17:09:48.111763   28964 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0919 17:09:48.111771   28964 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0919 17:09:48.111776   28964 command_runner.go:130] > # separate_pull_cgroup = ""
	I0919 17:09:48.111784   28964 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0919 17:09:48.111792   28964 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0919 17:09:48.111798   28964 command_runner.go:130] > # will be added.
	I0919 17:09:48.111802   28964 command_runner.go:130] > # default_capabilities = [
	I0919 17:09:48.111808   28964 command_runner.go:130] > # 	"CHOWN",
	I0919 17:09:48.111812   28964 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0919 17:09:48.111816   28964 command_runner.go:130] > # 	"FSETID",
	I0919 17:09:48.111819   28964 command_runner.go:130] > # 	"FOWNER",
	I0919 17:09:48.111826   28964 command_runner.go:130] > # 	"SETGID",
	I0919 17:09:48.111830   28964 command_runner.go:130] > # 	"SETUID",
	I0919 17:09:48.111834   28964 command_runner.go:130] > # 	"SETPCAP",
	I0919 17:09:48.111838   28964 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0919 17:09:48.111842   28964 command_runner.go:130] > # 	"KILL",
	I0919 17:09:48.111847   28964 command_runner.go:130] > # ]
	I0919 17:09:48.111856   28964 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0919 17:09:48.111862   28964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 17:09:48.111871   28964 command_runner.go:130] > # default_sysctls = [
	I0919 17:09:48.111877   28964 command_runner.go:130] > # ]
	I0919 17:09:48.111882   28964 command_runner.go:130] > # List of devices on the host that a
	I0919 17:09:48.111889   28964 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0919 17:09:48.111897   28964 command_runner.go:130] > # allowed_devices = [
	I0919 17:09:48.111902   28964 command_runner.go:130] > # 	"/dev/fuse",
	I0919 17:09:48.111908   28964 command_runner.go:130] > # ]
	I0919 17:09:48.111913   28964 command_runner.go:130] > # List of additional devices, specified as
	I0919 17:09:48.111921   28964 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0919 17:09:48.111929   28964 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0919 17:09:48.111957   28964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 17:09:48.111968   28964 command_runner.go:130] > # additional_devices = [
	I0919 17:09:48.111972   28964 command_runner.go:130] > # ]
	I0919 17:09:48.111977   28964 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0919 17:09:48.111980   28964 command_runner.go:130] > # cdi_spec_dirs = [
	I0919 17:09:48.111988   28964 command_runner.go:130] > # 	"/etc/cdi",
	I0919 17:09:48.111995   28964 command_runner.go:130] > # 	"/var/run/cdi",
	I0919 17:09:48.111998   28964 command_runner.go:130] > # ]
	I0919 17:09:48.112004   28964 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0919 17:09:48.112012   28964 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0919 17:09:48.112017   28964 command_runner.go:130] > # Defaults to false.
	I0919 17:09:48.112021   28964 command_runner.go:130] > # device_ownership_from_security_context = false
	I0919 17:09:48.112030   28964 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0919 17:09:48.112036   28964 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0919 17:09:48.112042   28964 command_runner.go:130] > # hooks_dir = [
	I0919 17:09:48.112048   28964 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0919 17:09:48.112055   28964 command_runner.go:130] > # ]
	I0919 17:09:48.112061   28964 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0919 17:09:48.112071   28964 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0919 17:09:48.112079   28964 command_runner.go:130] > # its default mounts from the following two files:
	I0919 17:09:48.112082   28964 command_runner.go:130] > #
	I0919 17:09:48.112090   28964 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0919 17:09:48.112098   28964 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0919 17:09:48.112106   28964 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0919 17:09:48.112113   28964 command_runner.go:130] > #
	I0919 17:09:48.112118   28964 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0919 17:09:48.112125   28964 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0919 17:09:48.112134   28964 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0919 17:09:48.112139   28964 command_runner.go:130] > #      only add mounts it finds in this file.
	I0919 17:09:48.112144   28964 command_runner.go:130] > #
	I0919 17:09:48.112148   28964 command_runner.go:130] > # default_mounts_file = ""
	I0919 17:09:48.112153   28964 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0919 17:09:48.112160   28964 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0919 17:09:48.112166   28964 command_runner.go:130] > pids_limit = 1024
	I0919 17:09:48.112172   28964 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0919 17:09:48.112181   28964 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0919 17:09:48.112187   28964 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0919 17:09:48.112197   28964 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0919 17:09:48.112203   28964 command_runner.go:130] > # log_size_max = -1
	I0919 17:09:48.112209   28964 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0919 17:09:48.112216   28964 command_runner.go:130] > # log_to_journald = false
	I0919 17:09:48.112226   28964 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0919 17:09:48.112233   28964 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0919 17:09:48.112238   28964 command_runner.go:130] > # Path to directory for container attach sockets.
	I0919 17:09:48.112245   28964 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0919 17:09:48.112251   28964 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0919 17:09:48.112257   28964 command_runner.go:130] > # bind_mount_prefix = ""
	I0919 17:09:48.112263   28964 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0919 17:09:48.112269   28964 command_runner.go:130] > # read_only = false
	I0919 17:09:48.112275   28964 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0919 17:09:48.112283   28964 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0919 17:09:48.112288   28964 command_runner.go:130] > # live configuration reload.
	I0919 17:09:48.112292   28964 command_runner.go:130] > # log_level = "info"
	I0919 17:09:48.112299   28964 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0919 17:09:48.112304   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:09:48.112308   28964 command_runner.go:130] > # log_filter = ""
	I0919 17:09:48.112314   28964 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0919 17:09:48.112321   28964 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0919 17:09:48.112331   28964 command_runner.go:130] > # separated by comma.
	I0919 17:09:48.112337   28964 command_runner.go:130] > # uid_mappings = ""
	I0919 17:09:48.112343   28964 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0919 17:09:48.112352   28964 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0919 17:09:48.112356   28964 command_runner.go:130] > # separated by comma.
	I0919 17:09:48.112362   28964 command_runner.go:130] > # gid_mappings = ""
	I0919 17:09:48.112368   28964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0919 17:09:48.112376   28964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 17:09:48.112388   28964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 17:09:48.112395   28964 command_runner.go:130] > # minimum_mappable_uid = -1
	I0919 17:09:48.112401   28964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0919 17:09:48.112431   28964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 17:09:48.112445   28964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 17:09:48.112452   28964 command_runner.go:130] > # minimum_mappable_gid = -1
	I0919 17:09:48.112458   28964 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0919 17:09:48.112466   28964 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0919 17:09:48.112472   28964 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0919 17:09:48.112479   28964 command_runner.go:130] > # ctr_stop_timeout = 30
	I0919 17:09:48.112484   28964 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0919 17:09:48.112496   28964 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0919 17:09:48.112504   28964 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0919 17:09:48.112508   28964 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0919 17:09:48.112517   28964 command_runner.go:130] > drop_infra_ctr = false
	I0919 17:09:48.112523   28964 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0919 17:09:48.112531   28964 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0919 17:09:48.112538   28964 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0919 17:09:48.112545   28964 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0919 17:09:48.112551   28964 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0919 17:09:48.112558   28964 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0919 17:09:48.112563   28964 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0919 17:09:48.112575   28964 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0919 17:09:48.112582   28964 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0919 17:09:48.112589   28964 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0919 17:09:48.112598   28964 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0919 17:09:48.112606   28964 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0919 17:09:48.112611   28964 command_runner.go:130] > # default_runtime = "runc"
	I0919 17:09:48.112618   28964 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0919 17:09:48.112627   28964 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0919 17:09:48.112638   28964 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0919 17:09:48.112644   28964 command_runner.go:130] > # creation as a file is not desired either.
	I0919 17:09:48.112652   28964 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0919 17:09:48.112660   28964 command_runner.go:130] > # the hostname is being managed dynamically.
	I0919 17:09:48.112664   28964 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0919 17:09:48.112668   28964 command_runner.go:130] > # ]
	I0919 17:09:48.112675   28964 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0919 17:09:48.112685   28964 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0919 17:09:48.112692   28964 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0919 17:09:48.112702   28964 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0919 17:09:48.112708   28964 command_runner.go:130] > #
	I0919 17:09:48.112712   28964 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0919 17:09:48.112718   28964 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0919 17:09:48.112722   28964 command_runner.go:130] > #  runtime_type = "oci"
	I0919 17:09:48.112727   28964 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0919 17:09:48.112734   28964 command_runner.go:130] > #  privileged_without_host_devices = false
	I0919 17:09:48.112738   28964 command_runner.go:130] > #  allowed_annotations = []
	I0919 17:09:48.112747   28964 command_runner.go:130] > # Where:
	I0919 17:09:48.112753   28964 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0919 17:09:48.112761   28964 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0919 17:09:48.112768   28964 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0919 17:09:48.112776   28964 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0919 17:09:48.112782   28964 command_runner.go:130] > #   in $PATH.
	I0919 17:09:48.112790   28964 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0919 17:09:48.112795   28964 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0919 17:09:48.112804   28964 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0919 17:09:48.112808   28964 command_runner.go:130] > #   state.
	I0919 17:09:48.112816   28964 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0919 17:09:48.112822   28964 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0919 17:09:48.112830   28964 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0919 17:09:48.112836   28964 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0919 17:09:48.112845   28964 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0919 17:09:48.112851   28964 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0919 17:09:48.112858   28964 command_runner.go:130] > #   The currently recognized values are:
	I0919 17:09:48.112865   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0919 17:09:48.112876   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0919 17:09:48.112885   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0919 17:09:48.112891   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0919 17:09:48.112901   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0919 17:09:48.112910   28964 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0919 17:09:48.112916   28964 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0919 17:09:48.112922   28964 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0919 17:09:48.112929   28964 command_runner.go:130] > #   should be moved to the container's cgroup
	I0919 17:09:48.112934   28964 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0919 17:09:48.112940   28964 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0919 17:09:48.112944   28964 command_runner.go:130] > runtime_type = "oci"
	I0919 17:09:48.112949   28964 command_runner.go:130] > runtime_root = "/run/runc"
	I0919 17:09:48.112956   28964 command_runner.go:130] > runtime_config_path = ""
	I0919 17:09:48.112960   28964 command_runner.go:130] > monitor_path = ""
	I0919 17:09:48.112965   28964 command_runner.go:130] > monitor_cgroup = ""
	I0919 17:09:48.112970   28964 command_runner.go:130] > monitor_exec_cgroup = ""
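	With runtime_path pointing at /usr/bin/runc and runtime_root at /run/runc, a minimal sketch for confirming that the handler resolves to a working binary is:

	    /usr/bin/runc --version
	    # Containers managed by this handler keep their state under runtime_root
	    sudo ls /run/runc 2>/dev/null | head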
	I0919 17:09:48.112976   28964 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0919 17:09:48.112982   28964 command_runner.go:130] > # running containers
	I0919 17:09:48.112989   28964 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0919 17:09:48.112998   28964 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0919 17:09:48.113042   28964 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0919 17:09:48.113051   28964 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0919 17:09:48.113056   28964 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0919 17:09:48.113061   28964 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0919 17:09:48.113067   28964 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0919 17:09:48.113072   28964 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0919 17:09:48.113077   28964 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0919 17:09:48.113084   28964 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0919 17:09:48.113090   28964 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0919 17:09:48.113098   28964 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0919 17:09:48.113104   28964 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0919 17:09:48.113113   28964 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0919 17:09:48.113121   28964 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0919 17:09:48.113129   28964 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0919 17:09:48.113141   28964 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0919 17:09:48.113151   28964 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0919 17:09:48.113159   28964 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0919 17:09:48.113166   28964 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0919 17:09:48.113172   28964 command_runner.go:130] > # Example:
	I0919 17:09:48.113176   28964 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0919 17:09:48.113183   28964 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0919 17:09:48.113190   28964 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0919 17:09:48.113198   28964 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0919 17:09:48.113202   28964 command_runner.go:130] > # cpuset = 0
	I0919 17:09:48.113208   28964 command_runner.go:130] > # cpushares = "0-1"
	I0919 17:09:48.113212   28964 command_runner.go:130] > # Where:
	I0919 17:09:48.113216   28964 command_runner.go:130] > # The workload name is workload-type.
	I0919 17:09:48.113228   28964 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0919 17:09:48.113236   28964 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0919 17:09:48.113242   28964 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0919 17:09:48.113252   28964 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0919 17:09:48.113257   28964 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0919 17:09:48.113263   28964 command_runner.go:130] > # 
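	As an illustration of the workload opt-in described above (pod name mypod and container name nginx are hypothetical; in practice these annotations belong in the pod manifest, since CRI-O reads them at container creation):

	    # Opt the pod into the workload (key-only annotation; the value is ignored)
	    kubectl annotate pod mypod io.crio/workload=
	    # Per-container override, mirroring the example given in the comment above
	    kubectl annotate pod mypod 'io.crio.workload-type/nginx={"cpushares": "512"}'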
	I0919 17:09:48.113269   28964 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0919 17:09:48.113279   28964 command_runner.go:130] > #
	I0919 17:09:48.113287   28964 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0919 17:09:48.113293   28964 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0919 17:09:48.113302   28964 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0919 17:09:48.113308   28964 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0919 17:09:48.113315   28964 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0919 17:09:48.113319   28964 command_runner.go:130] > [crio.image]
	I0919 17:09:48.113328   28964 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0919 17:09:48.113333   28964 command_runner.go:130] > # default_transport = "docker://"
	I0919 17:09:48.113341   28964 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0919 17:09:48.113347   28964 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0919 17:09:48.113351   28964 command_runner.go:130] > # global_auth_file = ""
	I0919 17:09:48.113356   28964 command_runner.go:130] > # The image used to instantiate infra containers.
	I0919 17:09:48.113364   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:09:48.113369   28964 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0919 17:09:48.113376   28964 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0919 17:09:48.113383   28964 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0919 17:09:48.113388   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:09:48.113396   28964 command_runner.go:130] > # pause_image_auth_file = ""
	I0919 17:09:48.113404   28964 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0919 17:09:48.113411   28964 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0919 17:09:48.113419   28964 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0919 17:09:48.113425   28964 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0919 17:09:48.113432   28964 command_runner.go:130] > # pause_command = "/pause"
	I0919 17:09:48.113438   28964 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0919 17:09:48.113445   28964 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0919 17:09:48.113451   28964 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0919 17:09:48.113459   28964 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0919 17:09:48.113465   28964 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0919 17:09:48.113469   28964 command_runner.go:130] > # signature_policy = ""
	I0919 17:09:48.113474   28964 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0919 17:09:48.113480   28964 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0919 17:09:48.113484   28964 command_runner.go:130] > # changing them here.
	I0919 17:09:48.113488   28964 command_runner.go:130] > # insecure_registries = [
	I0919 17:09:48.113491   28964 command_runner.go:130] > # ]
	I0919 17:09:48.113499   28964 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0919 17:09:48.113506   28964 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0919 17:09:48.113510   28964 command_runner.go:130] > # image_volumes = "mkdir"
	I0919 17:09:48.113515   28964 command_runner.go:130] > # Temporary directory to use for storing big files
	I0919 17:09:48.113519   28964 command_runner.go:130] > # big_files_temporary_dir = ""
	I0919 17:09:48.113525   28964 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0919 17:09:48.113529   28964 command_runner.go:130] > # CNI plugins.
	I0919 17:09:48.113532   28964 command_runner.go:130] > [crio.network]
	I0919 17:09:48.113538   28964 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0919 17:09:48.113543   28964 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0919 17:09:48.113547   28964 command_runner.go:130] > # cni_default_network = ""
	I0919 17:09:48.113552   28964 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0919 17:09:48.113556   28964 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0919 17:09:48.113562   28964 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0919 17:09:48.113569   28964 command_runner.go:130] > # plugin_dirs = [
	I0919 17:09:48.113573   28964 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0919 17:09:48.113576   28964 command_runner.go:130] > # ]
	I0919 17:09:48.113582   28964 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0919 17:09:48.113586   28964 command_runner.go:130] > [crio.metrics]
	I0919 17:09:48.113597   28964 command_runner.go:130] > # Globally enable or disable metrics support.
	I0919 17:09:48.113603   28964 command_runner.go:130] > enable_metrics = true
	I0919 17:09:48.113610   28964 command_runner.go:130] > # Specify enabled metrics collectors.
	I0919 17:09:48.113616   28964 command_runner.go:130] > # Per default all metrics are enabled.
	I0919 17:09:48.113625   28964 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0919 17:09:48.113635   28964 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0919 17:09:48.113642   28964 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0919 17:09:48.113649   28964 command_runner.go:130] > # metrics_collectors = [
	I0919 17:09:48.113653   28964 command_runner.go:130] > # 	"operations",
	I0919 17:09:48.113658   28964 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0919 17:09:48.113662   28964 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0919 17:09:48.113666   28964 command_runner.go:130] > # 	"operations_errors",
	I0919 17:09:48.113670   28964 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0919 17:09:48.113674   28964 command_runner.go:130] > # 	"image_pulls_by_name",
	I0919 17:09:48.113679   28964 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0919 17:09:48.113683   28964 command_runner.go:130] > # 	"image_pulls_failures",
	I0919 17:09:48.113687   28964 command_runner.go:130] > # 	"image_pulls_successes",
	I0919 17:09:48.113695   28964 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0919 17:09:48.113702   28964 command_runner.go:130] > # 	"image_layer_reuse",
	I0919 17:09:48.113709   28964 command_runner.go:130] > # 	"containers_oom_total",
	I0919 17:09:48.113713   28964 command_runner.go:130] > # 	"containers_oom",
	I0919 17:09:48.113717   28964 command_runner.go:130] > # 	"processes_defunct",
	I0919 17:09:48.113721   28964 command_runner.go:130] > # 	"operations_total",
	I0919 17:09:48.113725   28964 command_runner.go:130] > # 	"operations_latency_seconds",
	I0919 17:09:48.113733   28964 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0919 17:09:48.113737   28964 command_runner.go:130] > # 	"operations_errors_total",
	I0919 17:09:48.113741   28964 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0919 17:09:48.113748   28964 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0919 17:09:48.113753   28964 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0919 17:09:48.113758   28964 command_runner.go:130] > # 	"image_pulls_success_total",
	I0919 17:09:48.113762   28964 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0919 17:09:48.113769   28964 command_runner.go:130] > # 	"containers_oom_count_total",
	I0919 17:09:48.113772   28964 command_runner.go:130] > # ]
	I0919 17:09:48.113780   28964 command_runner.go:130] > # The port on which the metrics server will listen.
	I0919 17:09:48.113784   28964 command_runner.go:130] > # metrics_port = 9090
	I0919 17:09:48.113791   28964 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0919 17:09:48.113798   28964 command_runner.go:130] > # metrics_socket = ""
	I0919 17:09:48.113805   28964 command_runner.go:130] > # The certificate for the secure metrics server.
	I0919 17:09:48.113811   28964 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0919 17:09:48.113819   28964 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0919 17:09:48.113824   28964 command_runner.go:130] > # certificate on any modification event.
	I0919 17:09:48.113830   28964 command_runner.go:130] > # metrics_cert = ""
	I0919 17:09:48.113835   28964 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0919 17:09:48.113841   28964 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0919 17:09:48.113846   28964 command_runner.go:130] > # metrics_key = ""
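	With enable_metrics = true and the default metrics_port of 9090, the collectors listed above can be scraped directly; a sketch, assuming the metrics endpoint is reachable over plain HTTP on localhost:9090:

	    curl -s http://127.0.0.1:9090/metrics | grep -m 5 'crio_operations'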
	I0919 17:09:48.113853   28964 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0919 17:09:48.113857   28964 command_runner.go:130] > [crio.tracing]
	I0919 17:09:48.113863   28964 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0919 17:09:48.113869   28964 command_runner.go:130] > # enable_tracing = false
	I0919 17:09:48.113874   28964 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0919 17:09:48.113881   28964 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0919 17:09:48.113887   28964 command_runner.go:130] > # Number of samples to collect per million spans.
	I0919 17:09:48.113894   28964 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0919 17:09:48.113900   28964 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0919 17:09:48.113906   28964 command_runner.go:130] > [crio.stats]
	I0919 17:09:48.113914   28964 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0919 17:09:48.113919   28964 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0919 17:09:48.113926   28964 command_runner.go:130] > # stats_collection_period = 0
	I0919 17:09:48.114001   28964 cni.go:84] Creating CNI manager for ""
	I0919 17:09:48.114010   28964 cni.go:136] 3 nodes found, recommending kindnet
	I0919 17:09:48.114026   28964 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:09:48.114042   28964 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.38 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553715 NodeName:multinode-553715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 17:09:48.114181   28964 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553715"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
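	Before kubeadm consumes it, a rendered config like the one above can be exercised without changing node state; a sketch, using the /var/tmp/minikube/kubeadm.yaml.new path the file is copied to a few lines below:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run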
	
	I0919 17:09:48.114240   28964 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
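	After the drop-in above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below), the usual way to make it take effect is a sketch like:

	    sudo systemctl daemon-reload
	    sudo systemctl restart kubelet
	    systemctl status kubelet --no-pager --lines=5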
	I0919 17:09:48.114299   28964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 17:09:48.124133   28964 command_runner.go:130] > kubeadm
	I0919 17:09:48.124150   28964 command_runner.go:130] > kubectl
	I0919 17:09:48.124154   28964 command_runner.go:130] > kubelet
	I0919 17:09:48.124170   28964 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:09:48.124209   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:09:48.133023   28964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0919 17:09:48.149075   28964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:09:48.164401   28964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0919 17:09:48.180121   28964 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0919 17:09:48.183519   28964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
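	The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current one; whether the name now resolves can be checked with:

	    getent hosts control-plane.minikube.internal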
	I0919 17:09:48.194475   28964 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715 for IP: 192.168.39.38
	I0919 17:09:48.194502   28964 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:09:48.194645   28964 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:09:48.194691   28964 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:09:48.194780   28964 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key
	I0919 17:09:48.194852   28964 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key.383c1efe
	I0919 17:09:48.194908   28964 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.key
	I0919 17:09:48.194921   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 17:09:48.194944   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 17:09:48.194965   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 17:09:48.194988   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 17:09:48.195003   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 17:09:48.195022   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 17:09:48.195040   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 17:09:48.195058   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 17:09:48.195127   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:09:48.195176   28964 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:09:48.195190   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:09:48.195218   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:09:48.195250   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:09:48.195282   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:09:48.195337   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:09:48.195380   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /usr/share/ca-certificates/132392.pem
	I0919 17:09:48.195401   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:09:48.195418   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem -> /usr/share/ca-certificates/13239.pem
	I0919 17:09:48.196032   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:09:48.218666   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 17:09:48.239660   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:09:48.260828   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 17:09:48.282261   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:09:48.302960   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:09:48.324848   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:09:48.346085   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:09:48.367826   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:09:48.389232   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:09:48.410108   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:09:48.431043   28964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:09:48.446410   28964 ssh_runner.go:195] Run: openssl version
	I0919 17:09:48.451471   28964 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0919 17:09:48.451517   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:09:48.461651   28964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:09:48.465833   28964 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:09:48.466008   28964 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:09:48.466052   28964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:09:48.471269   28964 command_runner.go:130] > 3ec20f2e
	I0919 17:09:48.471308   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:09:48.481154   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:09:48.491189   28964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:09:48.495403   28964 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:09:48.495439   28964 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:09:48.495472   28964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:09:48.500273   28964 command_runner.go:130] > b5213941
	I0919 17:09:48.500630   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:09:48.510674   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:09:48.520514   28964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:09:48.524670   28964 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:09:48.524763   28964 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:09:48.524825   28964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:09:48.529978   28964 command_runner.go:130] > 51391683
	I0919 17:09:48.530026   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
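The three test-and-link pairs above implement OpenSSL's hashed CA directory convention: each trusted certificate placed under /etc/ssl/certs also gets a symlink named after its subject hash (the short value printed by the preceding openssl command, e.g. 3ec20f2e), which is the name OpenSSL-linked tools use to find an issuing CA. A minimal sketch of the same steps for a hypothetical certificate named example.pem (the file name is illustrative; everything else mirrors the commands in the log):

    # stage the cert, compute its subject hash, then create the <hash>.0 lookup link
    sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${HASH}.0"

On Debian-style images, update-ca-certificates performs the equivalent staging and rehash in bulk.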
	I0919 17:09:48.539941   28964 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:09:48.545182   28964 command_runner.go:130] > ca.crt
	I0919 17:09:48.545200   28964 command_runner.go:130] > ca.key
	I0919 17:09:48.545209   28964 command_runner.go:130] > healthcheck-client.crt
	I0919 17:09:48.545217   28964 command_runner.go:130] > healthcheck-client.key
	I0919 17:09:48.545229   28964 command_runner.go:130] > peer.crt
	I0919 17:09:48.545240   28964 command_runner.go:130] > peer.key
	I0919 17:09:48.545249   28964 command_runner.go:130] > server.crt
	I0919 17:09:48.545257   28964 command_runner.go:130] > server.key
	I0919 17:09:48.545333   28964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:09:48.551679   28964 command_runner.go:130] > Certificate will not expire
	I0919 17:09:48.552140   28964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:09:48.557538   28964 command_runner.go:130] > Certificate will not expire
	I0919 17:09:48.557990   28964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:09:48.563176   28964 command_runner.go:130] > Certificate will not expire
	I0919 17:09:48.563221   28964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:09:48.568629   28964 command_runner.go:130] > Certificate will not expire
	I0919 17:09:48.569060   28964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 17:09:48.574895   28964 command_runner.go:130] > Certificate will not expire
	I0919 17:09:48.574948   28964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 17:09:48.580431   28964 command_runner.go:130] > Certificate will not expire
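Each '-checkend 86400' call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours): it prints 'Certificate will not expire' and exits 0 when the certificate stays valid for at least that long, and exits non-zero otherwise, which is how the caller decides whether regeneration is needed. A standalone sketch using one of the paths copied earlier in this log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver cert is valid for at least another 24h"
    else
        echo "apiserver cert expires within 24h (or is already expired)"
    fi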
	I0919 17:09:48.580701   28964 kubeadm.go:404] StartCluster: {Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
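The StartCluster dump above is the persisted profile configuration for this run: a three-node cluster (control plane at 192.168.39.38 plus workers m02 and m03), kvm2 driver, CRI-O runtime, Kubernetes v1.28.2, and 2 CPUs / 2200 MB of memory per node. For reference, a roughly equivalent profile could be created with standard minikube flags such as the following (the exact flag set used by the test harness is not shown in this excerpt):

    minikube start -p multinode-553715 \
        --driver=kvm2 --container-runtime=crio \
        --kubernetes-version=v1.28.2 \
        --nodes=3 --cpus=2 --memory=2200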
	I0919 17:09:48.580829   28964 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 17:09:48.580861   28964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:09:48.626082   28964 cri.go:89] found id: ""
	I0919 17:09:48.626139   28964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:09:48.636388   28964 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0919 17:09:48.636425   28964 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0919 17:09:48.636435   28964 command_runner.go:130] > /var/lib/minikube/etcd:
	I0919 17:09:48.636441   28964 command_runner.go:130] > member
	I0919 17:09:48.636459   28964 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0919 17:09:48.636470   28964 kubeadm.go:636] restartCluster start
	I0919 17:09:48.636522   28964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 17:09:48.646084   28964 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:48.646695   28964 kubeconfig.go:92] found "multinode-553715" server: "https://192.168.39.38:8443"
	I0919 17:09:48.647143   28964 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:09:48.647369   28964 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:09:48.647968   28964 cert_rotation.go:137] Starting client certificate rotation controller
	I0919 17:09:48.648102   28964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 17:09:48.657454   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:48.657507   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:48.670136   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:48.670151   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:48.670190   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:48.681372   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:49.182058   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:49.533860   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:49.545762   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:49.682110   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:49.682195   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:49.694173   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:50.181591   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:50.181652   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:50.194046   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:50.681590   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:50.681659   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:50.693061   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:51.181669   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:51.181753   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:51.193783   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:51.682435   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:51.682524   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:51.696253   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:52.181832   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:52.181894   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:52.193963   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:52.681474   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:52.681578   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:52.693485   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:53.182116   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:53.182193   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:53.194216   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:53.681769   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:53.681849   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:53.694369   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:54.182320   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:54.182410   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:54.194412   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:54.681823   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:54.681907   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:54.694017   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:55.181494   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:55.181558   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:55.193106   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:55.681686   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:55.681758   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:55.693827   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:56.182378   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:56.182455   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:56.195759   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:56.682417   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:56.682536   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:56.695697   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:57.182329   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:57.182419   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:57.194355   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:57.681917   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:57.682018   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:57.694307   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:58.181809   28964 api_server.go:166] Checking apiserver status ...
	I0919 17:09:58.181893   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:09:58.193613   28964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:09:58.658333   28964 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
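The run of 'Checking apiserver status' entries above is a deadline-bounded poll: roughly every 500 ms the same 'sudo pgrep -xnf kube-apiserver.*minikube.*' probe is retried, and after about ten seconds without a hit the restart path concludes the apiserver is gone and falls through to a reconfigure. A minimal sketch of that kind of poll (not minikube's actual code, just the observable behaviour):

    deadline=$((SECONDS + 10))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "apiserver process did not appear before the deadline" >&2
            break
        fi
        sleep 0.5
    done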
	I0919 17:09:58.658359   28964 kubeadm.go:1128] stopping kube-system containers ...
	I0919 17:09:58.658400   28964 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0919 17:09:58.658456   28964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:09:58.700364   28964 cri.go:89] found id: ""
	I0919 17:09:58.700476   28964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 17:09:58.716347   28964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:09:58.725346   28964 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0919 17:09:58.725377   28964 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0919 17:09:58.725388   28964 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0919 17:09:58.725421   28964 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:09:58.725696   28964 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:09:58.725749   28964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:09:58.734878   28964 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0919 17:09:58.734904   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:09:58.857325   28964 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:09:58.857835   28964 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0919 17:09:58.858330   28964 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0919 17:09:58.858898   28964 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:09:58.859745   28964 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0919 17:09:58.860262   28964 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:09:58.861245   28964 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0919 17:09:58.861846   28964 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0919 17:09:58.862331   28964 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:09:58.862909   28964 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:09:58.863387   28964 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:09:58.864129   28964 command_runner.go:130] > [certs] Using the existing "sa" key
	I0919 17:09:58.865821   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:09:59.428497   28964 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:09:59.428521   28964 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:09:59.428527   28964 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:09:59.428535   28964 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:09:59.428545   28964 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:09:59.428578   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:09:59.624827   28964 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:09:59.624850   28964 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:09:59.624859   28964 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0919 17:09:59.624883   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:09:59.699119   28964 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:09:59.699145   28964 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:09:59.702099   28964 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:09:59.703157   28964 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:09:59.707077   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:09:59.783493   28964 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
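Rather than running a full 'kubeadm init', the restart path replays individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml. Collected in one place, the sequence visible in this log so far is (binary and config paths taken verbatim from the entries above; a final 'init phase addon all' follows later, once the apiserver reports healthy):

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.28.2
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all          --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all     --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start      --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all  --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local         --config "$CFG"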
	I0919 17:09:59.783534   28964 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:09:59.783596   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:09:59.797889   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:10:00.310910   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:10:00.810890   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:10:01.311822   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:10:01.811831   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:10:01.841528   28964 command_runner.go:130] > 1074
	I0919 17:10:01.841787   28964 api_server.go:72] duration metric: took 2.058249041s to wait for apiserver process to appear ...
	I0919 17:10:01.841807   28964 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:10:01.841823   28964 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:10:01.842344   28964 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0919 17:10:01.842381   28964 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:10:01.842822   28964 api_server.go:269] stopped: https://192.168.39.38:8443/healthz: Get "https://192.168.39.38:8443/healthz": dial tcp 192.168.39.38:8443: connect: connection refused
	I0919 17:10:02.342887   28964 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:10:06.021242   28964 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:10:06.021270   28964 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:10:06.021281   28964 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:10:06.055814   28964 api_server.go:279] https://192.168.39.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:10:06.055845   28964 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:10:06.343019   28964 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:10:06.352249   28964 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0919 17:10:06.352281   28964 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0919 17:10:06.843891   28964 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:10:06.848697   28964 api_server.go:279] https://192.168.39.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0919 17:10:06.848727   28964 api_server.go:103] status: https://192.168.39.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0919 17:10:07.343444   28964 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:10:07.350823   28964 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
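The progression above (403 for system:anonymous, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, then a plain 200 'ok') is the normal startup sequence for a freshly restarted apiserver; the verbose /healthz report lists every post-start hook and flips to ok once RBAC bootstrap completes. The same verbose report can be pulled from a running cluster with authenticated credentials, for example (kubeconfig or client-cert paths as in the client config logged above, abbreviated here):

    kubectl get --raw='/healthz?verbose'

    # or with curl and the profile's client certificates
    curl --cacert ~/.minikube/ca.crt \
         --cert   ~/.minikube/profiles/multinode-553715/client.crt \
         --key    ~/.minikube/profiles/multinode-553715/client.key \
         'https://192.168.39.38:8443/healthz?verbose'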
	I0919 17:10:07.350885   28964 round_trippers.go:463] GET https://192.168.39.38:8443/version
	I0919 17:10:07.350890   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:07.350899   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:07.350908   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:07.359917   28964 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0919 17:10:07.359935   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:07.359941   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:07.359948   28964 round_trippers.go:580]     Content-Length: 263
	I0919 17:10:07.359952   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:07 GMT
	I0919 17:10:07.359957   28964 round_trippers.go:580]     Audit-Id: 0932d9ad-3f4f-4a33-98d7-9347799a5891
	I0919 17:10:07.359962   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:07.359967   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:07.359972   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:07.359999   28964 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0919 17:10:07.360080   28964 api_server.go:141] control plane version: v1.28.2
	I0919 17:10:07.360096   28964 api_server.go:131] duration metric: took 5.518284765s to wait for apiserver health ...
	I0919 17:10:07.360104   28964 cni.go:84] Creating CNI manager for ""
	I0919 17:10:07.360109   28964 cni.go:136] 3 nodes found, recommending kindnet
	I0919 17:10:07.362317   28964 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 17:10:07.363897   28964 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 17:10:07.378634   28964 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0919 17:10:07.378658   28964 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0919 17:10:07.378667   28964 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0919 17:10:07.378673   28964 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 17:10:07.378679   28964 command_runner.go:130] > Access: 2023-09-19 17:09:35.017363855 +0000
	I0919 17:10:07.378684   28964 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I0919 17:10:07.378689   28964 command_runner.go:130] > Change: 2023-09-19 17:09:33.166363855 +0000
	I0919 17:10:07.378693   28964 command_runner.go:130] >  Birth: -
	I0919 17:10:07.378897   28964 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0919 17:10:07.378914   28964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0919 17:10:07.410297   28964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 17:10:08.574239   28964 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0919 17:10:08.578490   28964 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0919 17:10:08.592534   28964 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0919 17:10:08.612446   28964 command_runner.go:130] > daemonset.apps/kindnet configured
	I0919 17:10:08.615051   28964 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.204717857s)
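Because the kindnet manifest is applied with 'kubectl apply', the operation is idempotent: on this restart the clusterrole, clusterrolebinding and serviceaccount come back 'unchanged' and only the daemonset is 'configured'. To follow the resulting rollout by hand one could run standard kubectl against the same cluster (object names taken from the output above):

    kubectl -n kube-system rollout status daemonset/kindnet
    kubectl -n kube-system get daemonset kindnet -o wide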
	I0919 17:10:08.615085   28964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:10:08.615192   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:10:08.615206   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.615216   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.615226   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.619296   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:08.619325   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.619336   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.619344   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.619353   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.619361   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.619380   28964 round_trippers.go:580]     Audit-Id: 2c6113b0-61db-4b88-a1ff-307ad8d18bb8
	I0919 17:10:08.619389   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.620959   28964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"817"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83099 chars]
	I0919 17:10:08.624784   28964 system_pods.go:59] 12 kube-system pods found
	I0919 17:10:08.624810   28964 system_pods.go:61] "coredns-5dd5756b68-pffkm" [fbc226fb-43a9-4e0f-ac99-614f2740485d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:10:08.624817   28964 system_pods.go:61] "etcd-multinode-553715" [905a0370-ab9d-4138-bd11-12297717f1c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 17:10:08.624824   28964 system_pods.go:61] "kindnet-ccllv" [efcfebd2-47e1-4d7f-8ca8-16dda13542e8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 17:10:08.624835   28964 system_pods.go:61] "kindnet-lmmc5" [2479ec2b-6cd3-4fb2-b85f-43b175cfbb79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 17:10:08.624839   28964 system_pods.go:61] "kindnet-s8d6g" [e9d94488-d64b-437b-9f06-512b355c2598] Running
	I0919 17:10:08.624848   28964 system_pods.go:61] "kube-apiserver-multinode-553715" [e2712b6a-6771-4fb1-9b6d-e50e10e45411] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 17:10:08.624856   28964 system_pods.go:61] "kube-controller-manager-multinode-553715" [56eb8685-d2ae-4f50-8da1-dca616585190] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 17:10:08.624864   28964 system_pods.go:61] "kube-proxy-d5vl8" [88ab05d6-264f-40d8-9c55-c58829613212] Running
	I0919 17:10:08.624869   28964 system_pods.go:61] "kube-proxy-gnjwl" [86e13bd9-e0df-4a0b-b9a7-1746bb37c23b] Running
	I0919 17:10:08.624876   28964 system_pods.go:61] "kube-proxy-tvcz9" [377d6478-cda2-47b9-8af8-cff3064e8524] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 17:10:08.624882   28964 system_pods.go:61] "kube-scheduler-multinode-553715" [27c15070-fba4-4237-b6d2-4727af1e5809] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 17:10:08.624888   28964 system_pods.go:61] "storage-provisioner" [6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:10:08.624896   28964 system_pods.go:74] duration metric: took 9.804511ms to wait for pod list to return data ...
	I0919 17:10:08.624905   28964 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:10:08.624953   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I0919 17:10:08.624960   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.624968   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.624976   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.628854   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:08.628873   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.628883   28964 round_trippers.go:580]     Audit-Id: ed7f4db3-afee-461d-8dce-d214069399da
	I0919 17:10:08.628892   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.628901   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.628909   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.628933   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.628945   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.629964   28964 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"817"},"items":[{"metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15370 chars]
	I0919 17:10:08.631044   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:10:08.631077   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:10:08.631096   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:10:08.631103   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:10:08.631112   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:10:08.631122   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:10:08.631130   28964 node_conditions.go:105] duration metric: took 6.220398ms to run NodePressure ...
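The NodePressure check above simply reads each node's capacity (here 2 CPUs and 17784752Ki of ephemeral storage apiece) and its condition list from the same NodeList response. The equivalent data can be queried directly, for example:

    kubectl get node multinode-553715 \
        -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'
    kubectl get node multinode-553715 \
        -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'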
	I0919 17:10:08.631152   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:10:08.924918   28964 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0919 17:10:08.924937   28964 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0919 17:10:08.924960   28964 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:10:08.925047   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0919 17:10:08.925061   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.925068   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.925074   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.929087   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:08.929107   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.929114   28964 round_trippers.go:580]     Audit-Id: a37772b8-7e09-441c-9eaa-2b60ffea437b
	I0919 17:10:08.929119   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.929124   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.929130   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.929138   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.929146   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.929849   28964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"823"},"items":[{"metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I0919 17:10:08.930780   28964 kubeadm.go:787] kubelet initialised
	I0919 17:10:08.930793   28964 kubeadm.go:788] duration metric: took 5.825936ms waiting for restarted kubelet to initialise ...
	I0919 17:10:08.930799   28964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:10:08.930848   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:10:08.930855   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.930862   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.930871   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.935092   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:08.935107   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.935112   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.935118   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.935123   28964 round_trippers.go:580]     Audit-Id: b07421ee-dc9e-4dc6-9d3b-e2959d894be2
	I0919 17:10:08.935128   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.935133   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.935141   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.936757   28964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"823"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83099 chars]
	I0919 17:10:08.939323   28964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:08.939385   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:08.939392   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.939399   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.939405   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.941572   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:08.941585   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.941591   28964 round_trippers.go:580]     Audit-Id: bed3e1df-eef2-4a5a-a768-aee4c456d3f9
	I0919 17:10:08.941596   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.941601   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.941606   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.941611   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.941619   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.941907   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0919 17:10:08.942262   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:08.942272   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.942278   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.942285   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.947293   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:08.947310   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.947320   28964 round_trippers.go:580]     Audit-Id: 0d5dfe76-96d1-4175-8e6f-337c1b7f101d
	I0919 17:10:08.947331   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.947340   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.947349   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.947354   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.947360   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.947955   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:08.948217   28964 pod_ready.go:97] node "multinode-553715" hosting pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:08.948232   28964 pod_ready.go:81] duration metric: took 8.891398ms waiting for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	E0919 17:10:08.948239   28964 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553715" hosting pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:08.948246   28964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:08.948285   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:08.948293   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.948299   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.948304   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.955162   28964 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0919 17:10:08.955181   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.955188   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.955194   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.955199   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.955204   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.955209   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.955214   28964 round_trippers.go:580]     Audit-Id: a294b000-9484-4822-b36c-ed076aa0f125
	I0919 17:10:08.960220   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0919 17:10:08.960601   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:08.960613   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.960623   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.960632   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.966330   28964 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 17:10:08.966350   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.966360   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.966366   28964 round_trippers.go:580]     Audit-Id: b590a77f-cc3e-417f-8c42-edcad507d2a9
	I0919 17:10:08.966372   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.966377   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.966382   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.966387   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.968308   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:08.968620   28964 pod_ready.go:97] node "multinode-553715" hosting pod "etcd-multinode-553715" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:08.968636   28964 pod_ready.go:81] duration metric: took 20.381903ms waiting for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	E0919 17:10:08.968644   28964 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553715" hosting pod "etcd-multinode-553715" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:08.968662   28964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:08.968710   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553715
	I0919 17:10:08.968718   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.968725   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.968731   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.973257   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:08.973270   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.973276   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.973281   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.973286   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.973291   28964 round_trippers.go:580]     Audit-Id: 1d45d1fe-6910-449a-9fb0-6c32d5fbbcfe
	I0919 17:10:08.973297   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.973305   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.973469   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553715","namespace":"kube-system","uid":"e2712b6a-6771-4fb1-9b6d-e50e10e45411","resourceVersion":"802","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.mirror":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.seen":"2023-09-19T16:59:41.749099288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0919 17:10:08.973893   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:08.973905   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.973911   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.973917   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.984164   28964 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0919 17:10:08.984185   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.984192   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.984197   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.984203   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.984208   28964 round_trippers.go:580]     Audit-Id: 1b7ea0bf-df87-44ac-8b5a-03927ae57acd
	I0919 17:10:08.984212   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.984219   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.984598   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:08.984882   28964 pod_ready.go:97] node "multinode-553715" hosting pod "kube-apiserver-multinode-553715" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:08.984897   28964 pod_ready.go:81] duration metric: took 16.224614ms waiting for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	E0919 17:10:08.984905   28964 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553715" hosting pod "kube-apiserver-multinode-553715" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:08.984913   28964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:08.984964   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553715
	I0919 17:10:08.984973   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:08.984980   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:08.984985   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:08.987359   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:08.987376   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:08.987382   28964 round_trippers.go:580]     Audit-Id: d62c5dcc-fef7-458c-bd01-990ec4c6dc6f
	I0919 17:10:08.987388   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:08.987393   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:08.987398   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:08.987403   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:08.987408   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:08.988039   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553715","namespace":"kube-system","uid":"56eb8685-d2ae-4f50-8da1-dca616585190","resourceVersion":"803","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.mirror":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.seen":"2023-09-19T16:59:41.749100351Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0919 17:10:09.015647   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:09.015668   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:09.015676   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:09.015682   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:09.018502   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:09.018520   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:09.018526   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:09.018531   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:08 GMT
	I0919 17:10:09.018536   28964 round_trippers.go:580]     Audit-Id: ba2f41eb-4ca2-40b2-b2c9-cff5d1638f2b
	I0919 17:10:09.018541   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:09.018548   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:09.018554   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:09.018704   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:09.018995   28964 pod_ready.go:97] node "multinode-553715" hosting pod "kube-controller-manager-multinode-553715" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:09.019010   28964 pod_ready.go:81] duration metric: took 34.089797ms waiting for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	E0919 17:10:09.019019   28964 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553715" hosting pod "kube-controller-manager-multinode-553715" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:09.019026   28964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:09.216270   28964 request.go:629] Waited for 197.18023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:10:09.216332   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:10:09.216338   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:09.216345   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:09.216353   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:09.219257   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:09.219274   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:09.219281   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:09.219286   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:09 GMT
	I0919 17:10:09.219293   28964 round_trippers.go:580]     Audit-Id: 099b8275-dd10-4710-8437-e8bb61f5778d
	I0919 17:10:09.219302   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:09.219310   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:09.219320   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:09.219636   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d5vl8","generateName":"kube-proxy-","namespace":"kube-system","uid":"88ab05d6-264f-40d8-9c55-c58829613212","resourceVersion":"503","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0919 17:10:09.415863   28964 request.go:629] Waited for 195.841757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:10:09.415928   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:10:09.415936   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:09.415946   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:09.415963   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:09.419750   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:09.419774   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:09.419784   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:09 GMT
	I0919 17:10:09.419792   28964 round_trippers.go:580]     Audit-Id: da9bc60a-7810-4400-bba3-ef68d23705ff
	I0919 17:10:09.419800   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:09.419814   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:09.419821   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:09.419830   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:09.420045   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"733","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I0919 17:10:09.420319   28964 pod_ready.go:92] pod "kube-proxy-d5vl8" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:09.420332   28964 pod_ready.go:81] duration metric: took 401.297439ms waiting for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:09.420341   28964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gnjwl" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:09.615717   28964 request.go:629] Waited for 195.313195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:10:09.615790   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:10:09.615795   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:09.615804   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:09.615810   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:09.620902   28964 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 17:10:09.620925   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:09.620933   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:09.620938   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:09 GMT
	I0919 17:10:09.620944   28964 round_trippers.go:580]     Audit-Id: 9184500d-d55b-4415-a07e-743019fb6712
	I0919 17:10:09.620949   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:09.620954   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:09.620959   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:09.621471   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gnjwl","generateName":"kube-proxy-","namespace":"kube-system","uid":"86e13bd9-e0df-4a0b-b9a7-1746bb37c23b","resourceVersion":"708","creationTimestamp":"2023-09-19T17:01:27Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:01:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0919 17:10:09.816277   28964 request.go:629] Waited for 194.407475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:10:09.816358   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:10:09.816368   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:09.816380   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:09.816393   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:09.819203   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:09.819225   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:09.819232   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:09.819240   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:09.819245   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:09.819251   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:09.819256   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:09 GMT
	I0919 17:10:09.819263   28964 round_trippers.go:580]     Audit-Id: 43d703cd-0922-4cd4-92ac-b0f2ddd5be12
	I0919 17:10:09.819532   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m03","uid":"f3827816-de3c-418e-aa24-505b515ee53b","resourceVersion":"737","creationTimestamp":"2023-09-19T17:02:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3533 chars]
	I0919 17:10:09.819860   28964 pod_ready.go:92] pod "kube-proxy-gnjwl" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:09.819877   28964 pod_ready.go:81] duration metric: took 399.529015ms waiting for pod "kube-proxy-gnjwl" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:09.819889   28964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:10.015248   28964 request.go:629] Waited for 195.301472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:10:10.015308   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:10:10.015314   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:10.015321   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:10.015327   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:10.017873   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:10.017894   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:10.017907   28964 round_trippers.go:580]     Audit-Id: 14098fc5-07ac-487d-95d0-d0cb289d3492
	I0919 17:10:10.017915   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:10.017922   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:10.017929   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:10.017936   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:10.017944   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:09 GMT
	I0919 17:10:10.018215   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvcz9","generateName":"kube-proxy-","namespace":"kube-system","uid":"377d6478-cda2-47b9-8af8-cff3064e8524","resourceVersion":"825","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0919 17:10:10.215972   28964 request.go:629] Waited for 197.37481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:10.216053   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:10.216061   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:10.216072   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:10.216085   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:10.218942   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:10.218965   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:10.218979   28964 round_trippers.go:580]     Audit-Id: 4221f71c-43c6-4c1e-b149-90aab120ed6c
	I0919 17:10:10.218988   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:10.218996   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:10.219004   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:10.219012   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:10.219021   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:10 GMT
	I0919 17:10:10.220173   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:10.220540   28964 pod_ready.go:97] node "multinode-553715" hosting pod "kube-proxy-tvcz9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:10.220563   28964 pod_ready.go:81] duration metric: took 400.667569ms waiting for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	E0919 17:10:10.220574   28964 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553715" hosting pod "kube-proxy-tvcz9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:10.220584   28964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:10.415989   28964 request.go:629] Waited for 195.322195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:10:10.416053   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:10:10.416062   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:10.416073   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:10.416087   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:10.420135   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:10.420158   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:10.420169   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:10.420178   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:10.420185   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:10 GMT
	I0919 17:10:10.420201   28964 round_trippers.go:580]     Audit-Id: 86f4ee3f-2fdd-435a-a066-982081ac6e74
	I0919 17:10:10.420208   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:10.420215   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:10.420602   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553715","namespace":"kube-system","uid":"27c15070-fba4-4237-b6d2-4727af1e5809","resourceVersion":"805","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.mirror":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.seen":"2023-09-19T16:59:41.749088169Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I0919 17:10:10.615285   28964 request.go:629] Waited for 194.335823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:10.615363   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:10.615370   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:10.615381   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:10.615390   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:10.619452   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:10.619471   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:10.619481   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:10.619489   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:10.619497   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:10.619505   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:10 GMT
	I0919 17:10:10.619514   28964 round_trippers.go:580]     Audit-Id: 1be9b441-614a-4d8c-a860-bb1cdc343f24
	I0919 17:10:10.619522   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:10.620265   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:10.620600   28964 pod_ready.go:97] node "multinode-553715" hosting pod "kube-scheduler-multinode-553715" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:10.620619   28964 pod_ready.go:81] duration metric: took 400.021095ms waiting for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	E0919 17:10:10.620628   28964 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-553715" hosting pod "kube-scheduler-multinode-553715" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-553715" has status "Ready":"False"
	I0919 17:10:10.620638   28964 pod_ready.go:38] duration metric: took 1.689829047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:10:10.620653   28964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:10:10.633227   28964 command_runner.go:130] > -16
	I0919 17:10:10.633250   28964 ops.go:34] apiserver oom_adj: -16
	I0919 17:10:10.633256   28964 kubeadm.go:640] restartCluster took 21.996781105s
	I0919 17:10:10.633263   28964 kubeadm.go:406] StartCluster complete in 22.052578935s
	I0919 17:10:10.633276   28964 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:10:10.633343   28964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:10:10.633895   28964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:10:10.634100   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:10:10.634241   28964 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:10:10.636551   28964 out.go:177] * Enabled addons: 
	I0919 17:10:10.634392   28964 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:10:10.634433   28964 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:10:10.637969   28964 addons.go:502] enable addons completed in 3.735726ms: enabled=[]
	I0919 17:10:10.638204   28964 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:10:10.638512   28964 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 17:10:10.638526   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:10.638533   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:10.638542   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:10.642825   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:10.642841   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:10.642847   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:10.642853   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:10.642858   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:10.642863   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:10.642871   28964 round_trippers.go:580]     Content-Length: 291
	I0919 17:10:10.642881   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:10 GMT
	I0919 17:10:10.642894   28964 round_trippers.go:580]     Audit-Id: 132dd2a9-1c99-43ca-84be-20ec1d2f0c8d
	I0919 17:10:10.643031   28964 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e750729b-e558-4860-b72f-4a5c78572130","resourceVersion":"820","creationTimestamp":"2023-09-19T16:59:41Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0919 17:10:10.643218   28964 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553715" context rescaled to 1 replicas
	I0919 17:10:10.643250   28964 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:10:10.645683   28964 out.go:177] * Verifying Kubernetes components...
	I0919 17:10:10.647118   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:10:10.742430   28964 command_runner.go:130] > apiVersion: v1
	I0919 17:10:10.742449   28964 command_runner.go:130] > data:
	I0919 17:10:10.742454   28964 command_runner.go:130] >   Corefile: |
	I0919 17:10:10.742458   28964 command_runner.go:130] >     .:53 {
	I0919 17:10:10.742462   28964 command_runner.go:130] >         log
	I0919 17:10:10.742466   28964 command_runner.go:130] >         errors
	I0919 17:10:10.742470   28964 command_runner.go:130] >         health {
	I0919 17:10:10.742476   28964 command_runner.go:130] >            lameduck 5s
	I0919 17:10:10.742479   28964 command_runner.go:130] >         }
	I0919 17:10:10.742498   28964 command_runner.go:130] >         ready
	I0919 17:10:10.742506   28964 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0919 17:10:10.742510   28964 command_runner.go:130] >            pods insecure
	I0919 17:10:10.742522   28964 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0919 17:10:10.742527   28964 command_runner.go:130] >            ttl 30
	I0919 17:10:10.742533   28964 command_runner.go:130] >         }
	I0919 17:10:10.742539   28964 command_runner.go:130] >         prometheus :9153
	I0919 17:10:10.742546   28964 command_runner.go:130] >         hosts {
	I0919 17:10:10.742553   28964 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0919 17:10:10.742559   28964 command_runner.go:130] >            fallthrough
	I0919 17:10:10.742564   28964 command_runner.go:130] >         }
	I0919 17:10:10.742576   28964 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0919 17:10:10.742584   28964 command_runner.go:130] >            max_concurrent 1000
	I0919 17:10:10.742602   28964 command_runner.go:130] >         }
	I0919 17:10:10.742609   28964 command_runner.go:130] >         cache 30
	I0919 17:10:10.742617   28964 command_runner.go:130] >         loop
	I0919 17:10:10.742624   28964 command_runner.go:130] >         reload
	I0919 17:10:10.742630   28964 command_runner.go:130] >         loadbalance
	I0919 17:10:10.742636   28964 command_runner.go:130] >     }
	I0919 17:10:10.742641   28964 command_runner.go:130] > kind: ConfigMap
	I0919 17:10:10.742646   28964 command_runner.go:130] > metadata:
	I0919 17:10:10.742657   28964 command_runner.go:130] >   creationTimestamp: "2023-09-19T16:59:41Z"
	I0919 17:10:10.742664   28964 command_runner.go:130] >   name: coredns
	I0919 17:10:10.742673   28964 command_runner.go:130] >   namespace: kube-system
	I0919 17:10:10.742686   28964 command_runner.go:130] >   resourceVersion: "390"
	I0919 17:10:10.742697   28964 command_runner.go:130] >   uid: a0f116ef-660a-48dc-b415-9d01634b45c7
	I0919 17:10:10.744934   28964 node_ready.go:35] waiting up to 6m0s for node "multinode-553715" to be "Ready" ...
	I0919 17:10:10.745199   28964 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0919 17:10:10.815326   28964 request.go:629] Waited for 70.291664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:10.815396   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:10.815402   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:10.815409   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:10.815414   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:10.818240   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:10.818260   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:10.818267   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:10.818272   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:10.818286   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:10.818295   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:10.818302   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:10 GMT
	I0919 17:10:10.818310   28964 round_trippers.go:580]     Audit-Id: e1cd11ab-9bba-471f-9fd1-2ade622653c3
	I0919 17:10:10.818612   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:11.015227   28964 request.go:629] Waited for 196.286111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:11.015284   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:11.015289   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:11.015296   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:11.015307   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:11.020596   28964 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 17:10:11.020621   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:11.020631   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:11.020641   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:11.020648   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:10 GMT
	I0919 17:10:11.020654   28964 round_trippers.go:580]     Audit-Id: 2a87668f-f65b-4c39-92ea-d4207d73239f
	I0919 17:10:11.020659   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:11.020664   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:11.020882   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:11.521615   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:11.521636   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:11.521644   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:11.521650   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:11.524267   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:11.524292   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:11.524301   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:11.524310   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:11.524317   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:11.524323   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:11.524331   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:11 GMT
	I0919 17:10:11.524339   28964 round_trippers.go:580]     Audit-Id: a043a50f-c719-41ce-9a89-014fe1694e8c
	I0919 17:10:11.524485   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:12.022216   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:12.022239   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:12.022247   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:12.022252   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:12.025104   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:12.025121   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:12.025128   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:12.025134   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:12.025139   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:12 GMT
	I0919 17:10:12.025144   28964 round_trippers.go:580]     Audit-Id: 49debb36-3659-4dd6-8fae-63b2a08cdd66
	I0919 17:10:12.025153   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:12.025158   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:12.025664   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"746","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0919 17:10:12.521825   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:12.521849   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:12.521857   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:12.521879   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:12.525199   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:12.525220   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:12.525230   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:12.525238   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:12.525246   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:12.525259   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:12 GMT
	I0919 17:10:12.525266   28964 round_trippers.go:580]     Audit-Id: d141b1e0-5533-4592-ae6e-42bcb2c7a49e
	I0919 17:10:12.525275   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:12.525622   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:12.526011   28964 node_ready.go:49] node "multinode-553715" has status "Ready":"True"
	I0919 17:10:12.526029   28964 node_ready.go:38] duration metric: took 1.781074378s waiting for node "multinode-553715" to be "Ready" ...
	I0919 17:10:12.526037   28964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:10:12.526088   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:10:12.526096   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:12.526102   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:12.526108   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:12.529851   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:12.529878   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:12.529890   28964 round_trippers.go:580]     Audit-Id: 7dfefc1b-3062-448b-a8e0-108aa17b0cc4
	I0919 17:10:12.529898   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:12.529906   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:12.529914   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:12.529927   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:12.529935   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:12 GMT
	I0919 17:10:12.531795   28964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"856"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82916 chars]
	I0919 17:10:12.535271   28964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:12.535348   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:12.535362   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:12.535373   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:12.535385   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:12.537494   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:12.537508   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:12.537518   28964 round_trippers.go:580]     Audit-Id: 1f224d60-0a40-4ced-951f-642890ae36bd
	I0919 17:10:12.537527   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:12.537535   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:12.537546   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:12.537554   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:12.537564   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:12 GMT
	I0919 17:10:12.537788   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0919 17:10:12.538282   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:12.538296   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:12.538303   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:12.538309   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:12.540480   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:12.540499   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:12.540508   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:12.540516   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:12.540524   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:12.540531   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:12.540540   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:12 GMT
	I0919 17:10:12.540547   28964 round_trippers.go:580]     Audit-Id: 825458b8-8fcf-40a4-95ae-1231c19ec042
	I0919 17:10:12.540744   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:12.541170   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:12.541183   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:12.541190   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:12.541195   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:12.542978   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:10:12.542992   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:12.542997   28964 round_trippers.go:580]     Audit-Id: 89c3efbc-7236-486b-9918-186ae04bdfdc
	I0919 17:10:12.543003   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:12.543015   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:12.543034   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:12.543052   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:12.543059   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:12 GMT
	I0919 17:10:12.543281   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0919 17:10:12.615996   28964 request.go:629] Waited for 72.229038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:12.616070   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:12.616082   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:12.616089   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:12.616095   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:12.618860   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:12.618884   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:12.618891   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:12.618897   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:12 GMT
	I0919 17:10:12.618901   28964 round_trippers.go:580]     Audit-Id: d6eec934-b6c8-4398-8ded-a3b50bb20d82
	I0919 17:10:12.618906   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:12.618912   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:12.618919   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:12.619206   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:13.120014   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:13.120038   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:13.120047   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:13.120053   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:13.122695   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:13.122719   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:13.122729   28964 round_trippers.go:580]     Audit-Id: 06e10885-b6cb-47b4-b198-ac931bd8aa09
	I0919 17:10:13.122737   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:13.122746   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:13.122755   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:13.122763   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:13.122772   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:13 GMT
	I0919 17:10:13.123019   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0919 17:10:13.123464   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:13.123481   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:13.123491   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:13.123500   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:13.125640   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:13.125659   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:13.125669   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:13.125677   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:13.125684   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:13.125696   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:13 GMT
	I0919 17:10:13.125703   28964 round_trippers.go:580]     Audit-Id: 61245741-52cd-485a-b412-b024afa1a04a
	I0919 17:10:13.125714   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:13.125967   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:13.620681   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:13.620705   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:13.620713   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:13.620719   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:13.623553   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:13.623578   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:13.623597   28964 round_trippers.go:580]     Audit-Id: 5712bb3f-4954-49f4-9ba1-be1c7f7f20b5
	I0919 17:10:13.623604   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:13.623611   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:13.623619   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:13.623627   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:13.623639   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:13 GMT
	I0919 17:10:13.623840   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0919 17:10:13.624299   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:13.624313   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:13.624320   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:13.624329   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:13.626373   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:13.626389   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:13.626398   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:13 GMT
	I0919 17:10:13.626406   28964 round_trippers.go:580]     Audit-Id: b07fc849-18c3-4f75-a0c0-2f5ae822b4b9
	I0919 17:10:13.626415   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:13.626428   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:13.626438   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:13.626448   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:13.626565   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:14.120272   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:14.120295   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:14.120306   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:14.120315   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:14.123399   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:14.123423   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:14.123429   28964 round_trippers.go:580]     Audit-Id: 6695fa01-0153-4c3c-800c-1f4afb1bf512
	I0919 17:10:14.123435   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:14.123440   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:14.123445   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:14.123450   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:14.123455   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:14 GMT
	I0919 17:10:14.124286   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0919 17:10:14.124724   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:14.124736   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:14.124743   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:14.124749   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:14.127097   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:14.127117   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:14.127127   28964 round_trippers.go:580]     Audit-Id: 36d3cfce-0283-4cd9-9ba2-85945006271f
	I0919 17:10:14.127134   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:14.127141   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:14.127146   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:14.127151   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:14.127157   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:14 GMT
	I0919 17:10:14.127575   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:14.620493   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:14.620522   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:14.620533   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:14.620542   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:14.624939   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:14.624966   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:14.624973   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:14.624979   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:14.624984   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:14.624989   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:14.624994   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:14 GMT
	I0919 17:10:14.624999   28964 round_trippers.go:580]     Audit-Id: 5ab50b7f-9284-4003-b4b2-53cd0452e7eb
	I0919 17:10:14.626342   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0919 17:10:14.626768   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:14.626780   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:14.626787   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:14.626793   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:14.629007   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:14.629025   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:14.629030   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:14.629035   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:14.629040   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:14.629045   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:14 GMT
	I0919 17:10:14.629050   28964 round_trippers.go:580]     Audit-Id: 9472d600-ee88-4f7b-89d4-b8c4dac719fd
	I0919 17:10:14.629055   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:14.629301   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:14.629561   28964 pod_ready.go:102] pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace has status "Ready":"False"
	I0919 17:10:15.119715   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:15.119751   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:15.119762   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:15.119770   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:15.122553   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:15.122570   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:15.122576   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:15.122588   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:15.122600   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:15.122610   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:15.122621   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:15 GMT
	I0919 17:10:15.122629   28964 round_trippers.go:580]     Audit-Id: 63d8d503-884a-487d-9a73-18df4a928571
	I0919 17:10:15.122767   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0919 17:10:15.123214   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:15.123227   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:15.123234   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:15.123240   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:15.125594   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:15.125609   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:15.125615   28964 round_trippers.go:580]     Audit-Id: 0608c65d-698d-4b2a-87b6-333962d97776
	I0919 17:10:15.125620   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:15.125625   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:15.125630   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:15.125635   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:15.125640   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:15 GMT
	I0919 17:10:15.125807   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:15.620515   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:15.620535   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:15.620542   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:15.620549   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:15.623370   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:15.623393   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:15.623404   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:15.623412   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:15.623421   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:15 GMT
	I0919 17:10:15.623429   28964 round_trippers.go:580]     Audit-Id: f8f3fd72-a495-4872-9324-a2a3cd24c4e4
	I0919 17:10:15.623438   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:15.623447   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:15.623840   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"798","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0919 17:10:15.624383   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:15.624398   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:15.624422   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:15.624435   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:15.626977   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:15.626991   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:15.626997   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:15.627002   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:15 GMT
	I0919 17:10:15.627007   28964 round_trippers.go:580]     Audit-Id: 49cc50ef-0b3a-484b-913a-8ff59517923c
	I0919 17:10:15.627012   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:15.627020   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:15.627024   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:15.627372   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:16.119991   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:10:16.120013   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:16.120023   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:16.120029   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:16.123715   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:16.123736   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:16.123744   28964 round_trippers.go:580]     Audit-Id: c81079a8-9f0f-424a-adc9-65834ce11f50
	I0919 17:10:16.123750   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:16.123755   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:16.123759   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:16.123764   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:16.123772   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:16 GMT
	I0919 17:10:16.125742   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"867","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0919 17:10:16.126250   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:16.126266   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:16.126274   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:16.126283   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:16.128972   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:16.128993   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:16.129002   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:16.129035   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:16.129056   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:16.129065   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:16.129076   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:16 GMT
	I0919 17:10:16.129084   28964 round_trippers.go:580]     Audit-Id: 159aa347-c511-453d-9258-2a4374317916
	I0919 17:10:16.129587   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:16.129971   28964 pod_ready.go:92] pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:16.129991   28964 pod_ready.go:81] duration metric: took 3.594698145s waiting for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:16.129999   28964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:16.130046   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:16.130050   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:16.130060   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:16.130066   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:16.132128   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:16.132144   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:16.132153   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:16.132162   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:16.132170   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:16 GMT
	I0919 17:10:16.132178   28964 round_trippers.go:580]     Audit-Id: c37599b1-c6ad-41e7-8c63-ba7910cab749
	I0919 17:10:16.132189   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:16.132199   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:16.132760   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0919 17:10:16.133087   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:16.133100   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:16.133110   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:16.133118   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:16.135053   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:10:16.135070   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:16.135079   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:16.135087   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:16 GMT
	I0919 17:10:16.135094   28964 round_trippers.go:580]     Audit-Id: 3c9bf4e0-cbc1-4ffe-8085-590e7bd9d548
	I0919 17:10:16.135099   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:16.135104   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:16.135112   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:16.135733   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:16.136084   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:16.136097   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:16.136104   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:16.136110   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:16.138098   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:10:16.138114   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:16.138123   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:16.138133   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:16.138144   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:16.138154   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:16.138163   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:16 GMT
	I0919 17:10:16.138179   28964 round_trippers.go:580]     Audit-Id: 6ac8ef6b-e8fe-467a-a3d1-a64d505587e8
	I0919 17:10:16.138360   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0919 17:10:16.215994   28964 request.go:629] Waited for 77.220558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:16.216055   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:16.216062   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:16.216071   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:16.216079   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:16.218895   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:16.218918   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:16.218927   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:16.218933   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:16.218938   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:16.218944   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:16.218949   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:16 GMT
	I0919 17:10:16.218957   28964 round_trippers.go:580]     Audit-Id: 580cd737-a9d3-418e-b15a-371ceaf7ed48
	I0919 17:10:16.219554   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:16.720661   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:16.720684   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:16.720697   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:16.720706   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:16.723335   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:16.723355   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:16.723377   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:16.723387   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:16.723395   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:16.723409   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:16 GMT
	I0919 17:10:16.723417   28964 round_trippers.go:580]     Audit-Id: b5437fde-29f7-41c3-ae64-bbca88aa9549
	I0919 17:10:16.723425   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:16.723808   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0919 17:10:16.724164   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:16.724176   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:16.724183   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:16.724191   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:16.726271   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:16.726287   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:16.726295   28964 round_trippers.go:580]     Audit-Id: a1b9a715-19d1-4bce-9fd2-abba1334909c
	I0919 17:10:16.726304   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:16.726312   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:16.726321   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:16.726334   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:16.726343   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:16 GMT
	I0919 17:10:16.726605   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:17.220687   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:17.220708   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:17.220715   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:17.220722   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:17.223365   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:17.223388   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:17.223398   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:17 GMT
	I0919 17:10:17.223408   28964 round_trippers.go:580]     Audit-Id: 435f6c14-097f-4443-b9f1-1876f17f5359
	I0919 17:10:17.223417   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:17.223426   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:17.223432   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:17.223440   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:17.223801   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0919 17:10:17.224145   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:17.224155   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:17.224162   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:17.224168   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:17.226234   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:17.226251   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:17.226261   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:17 GMT
	I0919 17:10:17.226270   28964 round_trippers.go:580]     Audit-Id: 27edd45d-ef08-4fe0-ac01-7569d572bb90
	I0919 17:10:17.226279   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:17.226288   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:17.226295   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:17.226304   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:17.226589   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:17.720192   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:17.720214   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:17.720222   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:17.720228   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:17.723084   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:17.723105   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:17.723116   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:17 GMT
	I0919 17:10:17.723125   28964 round_trippers.go:580]     Audit-Id: 70797675-759c-4040-ab99-61e6773968d4
	I0919 17:10:17.723133   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:17.723140   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:17.723147   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:17.723161   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:17.723683   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0919 17:10:17.724124   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:17.724142   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:17.724153   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:17.724163   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:17.727490   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:17.727507   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:17.727517   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:17.727525   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:17.727532   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:17.727540   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:17.727549   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:17 GMT
	I0919 17:10:17.727557   28964 round_trippers.go:580]     Audit-Id: b6a30f1d-804f-4c8e-ada5-17d72c227c2b
	I0919 17:10:17.728398   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:18.221058   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:18.221078   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:18.221086   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:18.221092   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:18.248492   28964 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0919 17:10:18.248520   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:18.248530   28964 round_trippers.go:580]     Audit-Id: 4b3553d6-5b01-4746-bd80-c666c461c9ba
	I0919 17:10:18.248538   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:18.248545   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:18.248553   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:18.248561   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:18.248568   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:18 GMT
	I0919 17:10:18.248756   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0919 17:10:18.249250   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:18.249263   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:18.249274   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:18.249287   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:18.252192   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:18.252210   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:18.252218   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:18.252227   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:18.252234   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:18.252241   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:18 GMT
	I0919 17:10:18.252249   28964 round_trippers.go:580]     Audit-Id: 61915f99-541e-4b30-be49-ca8b3c459da2
	I0919 17:10:18.252260   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:18.252421   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:18.252785   28964 pod_ready.go:102] pod "etcd-multinode-553715" in "kube-system" namespace has status "Ready":"False"
	I0919 17:10:18.720291   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:18.720311   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:18.720319   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:18.720325   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:18.724242   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:18.724266   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:18.724277   28964 round_trippers.go:580]     Audit-Id: d63e975f-1861-44a9-bc8c-0a251cdb97be
	I0919 17:10:18.724286   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:18.724292   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:18.724297   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:18.724302   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:18.724307   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:18 GMT
	I0919 17:10:18.724673   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0919 17:10:18.725138   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:18.725152   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:18.725165   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:18.725175   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:18.727065   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:10:18.727079   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:18.727086   28964 round_trippers.go:580]     Audit-Id: 002d4a34-11b3-412b-a5cb-215504bfdd05
	I0919 17:10:18.727091   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:18.727096   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:18.727101   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:18.727106   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:18.727111   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:18 GMT
	I0919 17:10:18.727428   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:19.220127   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:19.220149   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:19.220156   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:19.220163   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:19.224067   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:19.224089   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:19.224097   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:19.224105   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:19.224114   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:19.224122   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:19.224127   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:19 GMT
	I0919 17:10:19.224133   28964 round_trippers.go:580]     Audit-Id: f6af70f6-9be4-446b-bc66-e71deb184258
	I0919 17:10:19.224270   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"804","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0919 17:10:19.224662   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:19.224675   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:19.224682   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:19.224688   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:19.227384   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:19.227401   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:19.227411   28964 round_trippers.go:580]     Audit-Id: 1e7a7257-b077-4834-8ec7-e03685746ab1
	I0919 17:10:19.227419   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:19.227426   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:19.227434   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:19.227441   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:19.227454   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:19 GMT
	I0919 17:10:19.227686   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:19.720118   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:10:19.720140   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:19.720148   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:19.720154   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:19.722735   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:19.722755   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:19.722764   28964 round_trippers.go:580]     Audit-Id: 62bb4788-5b89-4aff-8ee7-d8e04114cf45
	I0919 17:10:19.722772   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:19.722777   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:19.722782   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:19.722787   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:19.722792   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:19 GMT
	I0919 17:10:19.723278   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"890","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0919 17:10:19.723647   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:19.723660   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:19.723666   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:19.723672   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:19.725778   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:19.725798   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:19.725807   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:19 GMT
	I0919 17:10:19.725815   28964 round_trippers.go:580]     Audit-Id: e71d3386-e4c4-40fa-b067-14e5cf0b1b5a
	I0919 17:10:19.725824   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:19.725842   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:19.725851   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:19.725858   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:19.726029   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:19.726385   28964 pod_ready.go:92] pod "etcd-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:19.726402   28964 pod_ready.go:81] duration metric: took 3.596397165s waiting for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
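
The repeated GET requests above are minikube's pod_ready helper polling the etcd-multinode-553715 pod (and the node hosting it) roughly every 500ms until the pod reports its Ready condition as True, at which point the duration is recorded and the wait moves on to the next control-plane pod. Below is a minimal sketch of the same idea using client-go; it is illustrative only (waitForPodReady, the kubeconfig source, and the poll interval are assumptions, not minikube's actual implementation).

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls a pod until its Ready condition is True, mirroring
    // the "Ready":"True"/"False" checks in the log above.
    func waitForPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err // surface API errors instead of retrying silently
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady {
                    return cond.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil // Ready condition not reported yet; keep polling
        })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitForPodReady(cs, "kube-system", "etcd-multinode-553715", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("etcd pod is Ready")
    }
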
	I0919 17:10:19.726418   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:19.726476   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553715
	I0919 17:10:19.726483   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:19.726492   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:19.726500   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:19.728647   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:19.728662   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:19.728671   28964 round_trippers.go:580]     Audit-Id: e623506e-4ae0-48d4-807d-4bec2c205887
	I0919 17:10:19.728682   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:19.728691   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:19.728703   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:19.728716   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:19.728728   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:19 GMT
	I0919 17:10:19.729417   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553715","namespace":"kube-system","uid":"e2712b6a-6771-4fb1-9b6d-e50e10e45411","resourceVersion":"859","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.mirror":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.seen":"2023-09-19T16:59:41.749099288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0919 17:10:19.729817   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:19.729832   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:19.729842   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:19.729850   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:19.731613   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:10:19.731631   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:19.731641   28964 round_trippers.go:580]     Audit-Id: befb205b-0dfb-4421-97e0-1835d5de23db
	I0919 17:10:19.731650   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:19.731658   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:19.731670   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:19.731679   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:19.731688   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:19 GMT
	I0919 17:10:19.731982   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:19.732251   28964 pod_ready.go:92] pod "kube-apiserver-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:19.732265   28964 pod_ready.go:81] duration metric: took 5.837285ms waiting for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:19.732276   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:19.732320   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553715
	I0919 17:10:19.732329   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:19.732339   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:19.732349   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:19.734144   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:10:19.734162   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:19.734171   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:19.734180   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:19.734195   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:19.734206   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:19.734217   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:19 GMT
	I0919 17:10:19.734228   28964 round_trippers.go:580]     Audit-Id: fa19384b-b55b-4a1d-b626-6a3beb3a0902
	I0919 17:10:19.734532   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553715","namespace":"kube-system","uid":"56eb8685-d2ae-4f50-8da1-dca616585190","resourceVersion":"861","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.mirror":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.seen":"2023-09-19T16:59:41.749100351Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0919 17:10:19.816194   28964 request.go:629] Waited for 81.24172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:19.816269   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:19.816276   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:19.816289   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:19.816299   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:19.820193   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:19.820222   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:19.820230   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:19.820237   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:19 GMT
	I0919 17:10:19.820243   28964 round_trippers.go:580]     Audit-Id: 19950b5c-88f7-43b3-9990-a4deeed6a2e5
	I0919 17:10:19.820250   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:19.820256   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:19.820262   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:19.820980   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:19.821388   28964 pod_ready.go:92] pod "kube-controller-manager-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:19.821411   28964 pod_ready.go:81] duration metric: took 89.126376ms waiting for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:19.821425   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:20.015870   28964 request.go:629] Waited for 194.381063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:10:20.015929   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:10:20.015934   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:20.015942   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:20.015948   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:20.019326   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:20.019353   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:20.019363   28964 round_trippers.go:580]     Audit-Id: 9868f54b-d062-4fb4-8d63-e09882906c86
	I0919 17:10:20.019370   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:20.019377   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:20.019385   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:20.019393   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:20.019401   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:20 GMT
	I0919 17:10:20.020620   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d5vl8","generateName":"kube-proxy-","namespace":"kube-system","uid":"88ab05d6-264f-40d8-9c55-c58829613212","resourceVersion":"503","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0919 17:10:20.215316   28964 request.go:629] Waited for 194.284933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:10:20.215383   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:10:20.215389   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:20.215398   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:20.215404   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:20.218000   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:20.218023   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:20.218033   28964 round_trippers.go:580]     Audit-Id: f6fbc4e7-23a5-4d6c-9505-2e15816743f6
	I0919 17:10:20.218042   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:20.218050   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:20.218059   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:20.218067   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:20.218074   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:20 GMT
	I0919 17:10:20.218187   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8","resourceVersion":"733","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I0919 17:10:20.218530   28964 pod_ready.go:92] pod "kube-proxy-d5vl8" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:20.218553   28964 pod_ready.go:81] duration metric: took 397.11606ms waiting for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
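
The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's own rate limiter, not from the API server: once the client exceeds its configured QPS/Burst it queues further requests locally (the defaults are quite low when a rest.Config leaves them unset). A hedged sketch of how a client could raise those limits follows; the concrete numbers are arbitrary examples, not values minikube uses.

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        // Raise the client-side rate limits so bursts of GETs (like the pod/node
        // polling above) are not queued behind the local throttle.
        config.QPS = 50    // steady-state requests per second (example value)
        config.Burst = 100 // short-term burst allowance (example value)
        if _, err := kubernetes.NewForConfig(config); err != nil {
            log.Fatal(err)
        }
    }
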
	I0919 17:10:20.218566   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gnjwl" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:20.416017   28964 request.go:629] Waited for 197.380772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:10:20.416094   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:10:20.416101   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:20.416117   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:20.416128   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:20.418630   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:20.418650   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:20.418660   28964 round_trippers.go:580]     Audit-Id: 53b21b24-eb58-410a-9708-dca858e12780
	I0919 17:10:20.418669   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:20.418677   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:20.418686   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:20.418695   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:20.418707   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:20 GMT
	I0919 17:10:20.419407   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gnjwl","generateName":"kube-proxy-","namespace":"kube-system","uid":"86e13bd9-e0df-4a0b-b9a7-1746bb37c23b","resourceVersion":"708","creationTimestamp":"2023-09-19T17:01:27Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:01:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0919 17:10:20.616159   28964 request.go:629] Waited for 196.369316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:10:20.616231   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:10:20.616238   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:20.616250   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:20.616265   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:20.618978   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:20.618993   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:20.618999   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:20.619004   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:20.619010   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:20 GMT
	I0919 17:10:20.619018   28964 round_trippers.go:580]     Audit-Id: c800c483-07e1-4b9f-9ad2-ed7ce317567f
	I0919 17:10:20.619027   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:20.619036   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:20.619202   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m03","uid":"f3827816-de3c-418e-aa24-505b515ee53b","resourceVersion":"878","creationTimestamp":"2023-09-19T17:02:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0919 17:10:20.619538   28964 pod_ready.go:92] pod "kube-proxy-gnjwl" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:20.619559   28964 pod_ready.go:81] duration metric: took 400.985035ms waiting for pod "kube-proxy-gnjwl" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:20.619569   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:20.815989   28964 request.go:629] Waited for 196.361178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:10:20.816037   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:10:20.816042   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:20.816050   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:20.816056   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:20.818899   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:20.818921   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:20.818931   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:20.818939   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:20.818966   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:20.818975   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:20.818983   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:20 GMT
	I0919 17:10:20.818994   28964 round_trippers.go:580]     Audit-Id: ea385125-c508-4065-b916-bc1cca1547bf
	I0919 17:10:20.819479   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvcz9","generateName":"kube-proxy-","namespace":"kube-system","uid":"377d6478-cda2-47b9-8af8-cff3064e8524","resourceVersion":"825","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0919 17:10:21.015211   28964 request.go:629] Waited for 195.292305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:21.015271   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:21.015277   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:21.015284   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:21.015290   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:21.018785   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:21.018807   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:21.018814   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:21.018820   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:21.018825   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:21.018830   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:21 GMT
	I0919 17:10:21.018838   28964 round_trippers.go:580]     Audit-Id: bd816e97-7d4b-4472-963d-4c86fda96988
	I0919 17:10:21.018847   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:21.019070   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:21.019485   28964 pod_ready.go:92] pod "kube-proxy-tvcz9" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:21.019509   28964 pod_ready.go:81] duration metric: took 399.930796ms waiting for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:21.019518   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:21.215935   28964 request.go:629] Waited for 196.358296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:10:21.216012   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:10:21.216020   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:21.216031   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:21.216047   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:21.219437   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:21.219459   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:21.219472   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:21.219479   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:21.219486   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:21 GMT
	I0919 17:10:21.219494   28964 round_trippers.go:580]     Audit-Id: 3e63554e-80fd-4a58-8c84-fe1fee7bdee9
	I0919 17:10:21.219501   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:21.219509   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:21.219786   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553715","namespace":"kube-system","uid":"27c15070-fba4-4237-b6d2-4727af1e5809","resourceVersion":"857","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.mirror":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.seen":"2023-09-19T16:59:41.749088169Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0919 17:10:21.415567   28964 request.go:629] Waited for 195.363892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:21.415627   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:10:21.415633   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:21.415640   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:21.415647   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:21.418571   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:10:21.418588   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:21.418595   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:21.418600   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:21.418606   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:21.418611   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:21 GMT
	I0919 17:10:21.418616   28964 round_trippers.go:580]     Audit-Id: 95fa1334-a7cf-494f-bdc7-12a670a24206
	I0919 17:10:21.418625   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:21.418788   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0919 17:10:21.419185   28964 pod_ready.go:92] pod "kube-scheduler-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:10:21.419202   28964 pod_ready.go:81] duration metric: took 399.675499ms waiting for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:10:21.419215   28964 pod_ready.go:38] duration metric: took 8.893169186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:10:21.419236   28964 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:10:21.419292   28964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:10:21.433457   28964 command_runner.go:130] > 1074
	I0919 17:10:21.433523   28964 api_server.go:72] duration metric: took 10.790242893s to wait for apiserver process to appear ...
	I0919 17:10:21.433540   28964 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:10:21.433559   28964 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:10:21.438315   28964 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0919 17:10:21.438370   28964 round_trippers.go:463] GET https://192.168.39.38:8443/version
	I0919 17:10:21.438378   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:21.438385   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:21.438393   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:21.439528   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:10:21.439541   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:21.439547   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:21.439552   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:21.439557   28964 round_trippers.go:580]     Content-Length: 263
	I0919 17:10:21.439562   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:21 GMT
	I0919 17:10:21.439574   28964 round_trippers.go:580]     Audit-Id: 582b012e-3c43-4c8d-bf47-280bd06ebb1a
	I0919 17:10:21.439587   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:21.439591   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:21.439652   28964 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0919 17:10:21.439705   28964 api_server.go:141] control plane version: v1.28.2
	I0919 17:10:21.439721   28964 api_server.go:131] duration metric: took 6.174009ms to wait for apiserver health ...
	I0919 17:10:21.439730   28964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:10:21.616101   28964 request.go:629] Waited for 176.308433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:10:21.616165   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:10:21.616170   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:21.616178   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:21.616183   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:21.620427   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:21.620441   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:21.620447   28964 round_trippers.go:580]     Audit-Id: 3a3a0584-57bc-4d4b-b14a-332d8c35e3f9
	I0919 17:10:21.620452   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:21.620460   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:21.620469   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:21.620480   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:21.620489   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:21 GMT
	I0919 17:10:21.621870   28964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"890"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"867","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I0919 17:10:21.624363   28964 system_pods.go:59] 12 kube-system pods found
	I0919 17:10:21.624383   28964 system_pods.go:61] "coredns-5dd5756b68-pffkm" [fbc226fb-43a9-4e0f-ac99-614f2740485d] Running
	I0919 17:10:21.624388   28964 system_pods.go:61] "etcd-multinode-553715" [905a0370-ab9d-4138-bd11-12297717f1c5] Running
	I0919 17:10:21.624394   28964 system_pods.go:61] "kindnet-ccllv" [efcfebd2-47e1-4d7f-8ca8-16dda13542e8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 17:10:21.624402   28964 system_pods.go:61] "kindnet-lmmc5" [2479ec2b-6cd3-4fb2-b85f-43b175cfbb79] Running
	I0919 17:10:21.624435   28964 system_pods.go:61] "kindnet-s8d6g" [e9d94488-d64b-437b-9f06-512b355c2598] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 17:10:21.624447   28964 system_pods.go:61] "kube-apiserver-multinode-553715" [e2712b6a-6771-4fb1-9b6d-e50e10e45411] Running
	I0919 17:10:21.624455   28964 system_pods.go:61] "kube-controller-manager-multinode-553715" [56eb8685-d2ae-4f50-8da1-dca616585190] Running
	I0919 17:10:21.624465   28964 system_pods.go:61] "kube-proxy-d5vl8" [88ab05d6-264f-40d8-9c55-c58829613212] Running
	I0919 17:10:21.624473   28964 system_pods.go:61] "kube-proxy-gnjwl" [86e13bd9-e0df-4a0b-b9a7-1746bb37c23b] Running
	I0919 17:10:21.624477   28964 system_pods.go:61] "kube-proxy-tvcz9" [377d6478-cda2-47b9-8af8-cff3064e8524] Running
	I0919 17:10:21.624483   28964 system_pods.go:61] "kube-scheduler-multinode-553715" [27c15070-fba4-4237-b6d2-4727af1e5809] Running
	I0919 17:10:21.624487   28964 system_pods.go:61] "storage-provisioner" [6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8] Running
	I0919 17:10:21.624495   28964 system_pods.go:74] duration metric: took 184.756171ms to wait for pod list to return data ...
	I0919 17:10:21.624505   28964 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:10:21.815930   28964 request.go:629] Waited for 191.361547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/default/serviceaccounts
	I0919 17:10:21.815982   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/default/serviceaccounts
	I0919 17:10:21.815989   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:21.816012   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:21.816027   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:21.819057   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:21.819072   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:21.819079   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:21.819084   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:21.819089   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:21.819094   28964 round_trippers.go:580]     Content-Length: 261
	I0919 17:10:21.819099   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:21 GMT
	I0919 17:10:21.819104   28964 round_trippers.go:580]     Audit-Id: a00ad54d-8fab-4ee5-86f5-33f8c3a4b36f
	I0919 17:10:21.819109   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:21.819248   28964 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"890"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cfc0cbe8-f46b-4c2d-9338-ce249fe7510f","resourceVersion":"336","creationTimestamp":"2023-09-19T16:59:53Z"}}]}
	I0919 17:10:21.819423   28964 default_sa.go:45] found service account: "default"
	I0919 17:10:21.819437   28964 default_sa.go:55] duration metric: took 194.926718ms for default service account to be created ...
	I0919 17:10:21.819444   28964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:10:22.015854   28964 request.go:629] Waited for 196.34112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:10:22.015937   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:10:22.015948   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:22.015964   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:22.015978   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:22.020960   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:10:22.020981   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:22.020991   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:22.020999   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:22.021009   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:22.021016   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:22.021024   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:22 GMT
	I0919 17:10:22.021032   28964 round_trippers.go:580]     Audit-Id: 43550bcc-b89d-4f44-972b-cb0898ee7b11
	I0919 17:10:22.022810   28964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"890"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"867","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I0919 17:10:22.025185   28964 system_pods.go:86] 12 kube-system pods found
	I0919 17:10:22.025207   28964 system_pods.go:89] "coredns-5dd5756b68-pffkm" [fbc226fb-43a9-4e0f-ac99-614f2740485d] Running
	I0919 17:10:22.025215   28964 system_pods.go:89] "etcd-multinode-553715" [905a0370-ab9d-4138-bd11-12297717f1c5] Running
	I0919 17:10:22.025224   28964 system_pods.go:89] "kindnet-ccllv" [efcfebd2-47e1-4d7f-8ca8-16dda13542e8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 17:10:22.025232   28964 system_pods.go:89] "kindnet-lmmc5" [2479ec2b-6cd3-4fb2-b85f-43b175cfbb79] Running
	I0919 17:10:22.025243   28964 system_pods.go:89] "kindnet-s8d6g" [e9d94488-d64b-437b-9f06-512b355c2598] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 17:10:22.025251   28964 system_pods.go:89] "kube-apiserver-multinode-553715" [e2712b6a-6771-4fb1-9b6d-e50e10e45411] Running
	I0919 17:10:22.025260   28964 system_pods.go:89] "kube-controller-manager-multinode-553715" [56eb8685-d2ae-4f50-8da1-dca616585190] Running
	I0919 17:10:22.025269   28964 system_pods.go:89] "kube-proxy-d5vl8" [88ab05d6-264f-40d8-9c55-c58829613212] Running
	I0919 17:10:22.025277   28964 system_pods.go:89] "kube-proxy-gnjwl" [86e13bd9-e0df-4a0b-b9a7-1746bb37c23b] Running
	I0919 17:10:22.025287   28964 system_pods.go:89] "kube-proxy-tvcz9" [377d6478-cda2-47b9-8af8-cff3064e8524] Running
	I0919 17:10:22.025295   28964 system_pods.go:89] "kube-scheduler-multinode-553715" [27c15070-fba4-4237-b6d2-4727af1e5809] Running
	I0919 17:10:22.025305   28964 system_pods.go:89] "storage-provisioner" [6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8] Running
	I0919 17:10:22.025314   28964 system_pods.go:126] duration metric: took 205.864491ms to wait for k8s-apps to be running ...
	I0919 17:10:22.025326   28964 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:10:22.025374   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:10:22.040037   28964 system_svc.go:56] duration metric: took 14.70627ms WaitForService to wait for kubelet.
	I0919 17:10:22.040055   28964 kubeadm.go:581] duration metric: took 11.396776125s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:10:22.040079   28964 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:10:22.215848   28964 request.go:629] Waited for 175.701578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I0919 17:10:22.215928   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I0919 17:10:22.215939   28964 round_trippers.go:469] Request Headers:
	I0919 17:10:22.215954   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:10:22.215966   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:10:22.219132   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:10:22.219150   28964 round_trippers.go:577] Response Headers:
	I0919 17:10:22.219157   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:10:22.219162   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:10:22.219168   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:10:22.219181   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:10:22 GMT
	I0919 17:10:22.219190   28964 round_trippers.go:580]     Audit-Id: a4e58cf0-1618-4f97-96b8-89e72bf39134
	I0919 17:10:22.219199   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:10:22.219613   28964 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"890"},"items":[{"metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"856","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15074 chars]
	I0919 17:10:22.220141   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:10:22.220157   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:10:22.220165   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:10:22.220169   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:10:22.220173   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:10:22.220176   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:10:22.220180   28964 node_conditions.go:105] duration metric: took 180.096901ms to run NodePressure ...
	I0919 17:10:22.220190   28964 start.go:228] waiting for startup goroutines ...
	I0919 17:10:22.220201   28964 start.go:233] waiting for cluster config update ...
	I0919 17:10:22.220208   28964 start.go:242] writing updated cluster config ...
	I0919 17:10:22.220657   28964 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:10:22.220736   28964 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 17:10:22.223450   28964 out.go:177] * Starting worker node multinode-553715-m02 in cluster multinode-553715
	I0919 17:10:22.224773   28964 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 17:10:22.224795   28964 cache.go:57] Caching tarball of preloaded images
	I0919 17:10:22.224890   28964 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:10:22.224901   28964 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 17:10:22.224974   28964 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 17:10:22.225123   28964 start.go:365] acquiring machines lock for multinode-553715-m02: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:10:22.225161   28964 start.go:369] acquired machines lock for "multinode-553715-m02" in 20.932µs
	I0919 17:10:22.225174   28964 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:10:22.225179   28964 fix.go:54] fixHost starting: m02
	I0919 17:10:22.225444   28964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:10:22.225472   28964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:10:22.239548   28964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0919 17:10:22.239959   28964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:10:22.240358   28964 main.go:141] libmachine: Using API Version  1
	I0919 17:10:22.240379   28964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:10:22.240707   28964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:10:22.240876   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:10:22.240984   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetState
	I0919 17:10:22.242665   28964 fix.go:102] recreateIfNeeded on multinode-553715-m02: state=Running err=<nil>
	W0919 17:10:22.242684   28964 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:10:22.245678   28964 out.go:177] * Updating the running kvm2 "multinode-553715-m02" VM ...
	I0919 17:10:22.247043   28964 machine.go:88] provisioning docker machine ...
	I0919 17:10:22.247062   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:10:22.247280   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetMachineName
	I0919 17:10:22.247459   28964 buildroot.go:166] provisioning hostname "multinode-553715-m02"
	I0919 17:10:22.247476   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetMachineName
	I0919 17:10:22.247626   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:10:22.250204   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.250682   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:10:22.250714   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.251028   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:10:22.251196   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:10:22.251366   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:10:22.251505   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:10:22.251700   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:10:22.252055   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:10:22.252069   28964 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553715-m02 && echo "multinode-553715-m02" | sudo tee /etc/hostname
	I0919 17:10:22.379502   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553715-m02
	
	I0919 17:10:22.379532   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:10:22.382250   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.382599   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:10:22.382637   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.382823   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:10:22.383036   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:10:22.383189   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:10:22.383312   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:10:22.383461   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:10:22.383785   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:10:22.383816   28964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553715-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553715-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553715-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:10:22.501170   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:10:22.501192   28964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:10:22.501209   28964 buildroot.go:174] setting up certificates
	I0919 17:10:22.501216   28964 provision.go:83] configureAuth start
	I0919 17:10:22.501223   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetMachineName
	I0919 17:10:22.501491   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetIP
	I0919 17:10:22.503999   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.504355   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:10:22.504388   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.504574   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:10:22.506831   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.507164   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:10:22.507201   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.507371   28964 provision.go:138] copyHostCerts
	I0919 17:10:22.507399   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:10:22.507422   28964 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:10:22.507434   28964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:10:22.507501   28964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:10:22.507565   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:10:22.507582   28964 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:10:22.507589   28964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:10:22.507612   28964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:10:22.507654   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:10:22.507670   28964 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:10:22.507676   28964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:10:22.507694   28964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:10:22.507740   28964 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.multinode-553715-m02 san=[192.168.39.11 192.168.39.11 localhost 127.0.0.1 minikube multinode-553715-m02]
	I0919 17:10:22.746998   28964 provision.go:172] copyRemoteCerts
	I0919 17:10:22.747054   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:10:22.747075   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:10:22.749809   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.750185   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:10:22.750220   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.750320   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:10:22.750531   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:10:22.750650   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:10:22.750865   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa Username:docker}
	I0919 17:10:22.837537   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 17:10:22.837608   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0919 17:10:22.861402   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 17:10:22.861476   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:10:22.885224   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 17:10:22.885293   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:10:22.908762   28964 provision.go:86] duration metric: configureAuth took 407.531281ms
	I0919 17:10:22.908789   28964 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:10:22.908979   28964 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:10:22.909071   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:10:22.911778   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.912249   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:10:22.912285   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:10:22.912498   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:10:22.912731   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:10:22.912905   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:10:22.913058   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:10:22.913214   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:10:22.913585   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:10:22.913603   28964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:11:53.380232   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:11:53.380257   28964 machine.go:91] provisioned docker machine in 1m31.1332021s
	I0919 17:11:53.380268   28964 start.go:300] post-start starting for "multinode-553715-m02" (driver="kvm2")
	I0919 17:11:53.380278   28964 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:11:53.380295   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:11:53.380616   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:11:53.380654   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:11:53.383520   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.383930   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:11:53.383966   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.384134   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:11:53.384333   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:11:53.384500   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:11:53.384637   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa Username:docker}
	I0919 17:11:53.521845   28964 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:11:53.526182   28964 command_runner.go:130] > NAME=Buildroot
	I0919 17:11:53.526205   28964 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I0919 17:11:53.526212   28964 command_runner.go:130] > ID=buildroot
	I0919 17:11:53.526220   28964 command_runner.go:130] > VERSION_ID=2021.02.12
	I0919 17:11:53.526227   28964 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0919 17:11:53.526263   28964 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:11:53.526280   28964 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:11:53.526347   28964 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:11:53.526415   28964 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:11:53.526425   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /etc/ssl/certs/132392.pem
	I0919 17:11:53.526500   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:11:53.534656   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:11:53.558878   28964 start.go:303] post-start completed in 178.598802ms
	I0919 17:11:53.558901   28964 fix.go:56] fixHost completed within 1m31.33372027s
	I0919 17:11:53.558925   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:11:53.562063   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.562431   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:11:53.562468   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.562727   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:11:53.563006   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:11:53.563200   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:11:53.563374   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:11:53.563535   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:11:53.563897   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0919 17:11:53.563917   28964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:11:53.685470   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695143513.676478743
	
	I0919 17:11:53.685492   28964 fix.go:206] guest clock: 1695143513.676478743
	I0919 17:11:53.685499   28964 fix.go:219] Guest: 2023-09-19 17:11:53.676478743 +0000 UTC Remote: 2023-09-19 17:11:53.558906972 +0000 UTC m=+449.433975452 (delta=117.571771ms)
	I0919 17:11:53.685513   28964 fix.go:190] guest clock delta is within tolerance: 117.571771ms
	I0919 17:11:53.685524   28964 start.go:83] releasing machines lock for "multinode-553715-m02", held for 1m31.460353702s
	I0919 17:11:53.685552   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:11:53.685775   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetIP
	I0919 17:11:53.688373   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.688738   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:11:53.688769   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.690913   28964 out.go:177] * Found network options:
	I0919 17:11:53.692590   28964 out.go:177]   - NO_PROXY=192.168.39.38
	W0919 17:11:53.694169   28964 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 17:11:53.694193   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:11:53.694744   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:11:53.694912   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:11:53.694983   28964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:11:53.695022   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	W0919 17:11:53.695089   28964 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 17:11:53.695146   28964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:11:53.695162   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:11:53.697574   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.697863   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.697955   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:11:53.697990   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.698135   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:11:53.698275   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:11:53.698285   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:11:53.698304   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:53.698455   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:11:53.698518   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:11:53.698586   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa Username:docker}
	I0919 17:11:53.698745   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:11:53.698897   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:11:53.699046   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa Username:docker}
	I0919 17:11:53.937296   28964 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0919 17:11:53.937315   28964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 17:11:53.943209   28964 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0919 17:11:53.943322   28964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:11:53.943387   28964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:11:53.951315   28964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 17:11:53.951348   28964 start.go:469] detecting cgroup driver to use...
	I0919 17:11:53.951477   28964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:11:53.964299   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:11:53.976529   28964 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:11:53.976587   28964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:11:53.989010   28964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:11:54.002115   28964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:11:54.144101   28964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:11:54.274478   28964 docker.go:212] disabling docker service ...
	I0919 17:11:54.274542   28964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:11:54.289718   28964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:11:54.304842   28964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:11:54.436178   28964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:11:54.565592   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:11:54.579503   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:11:54.596206   28964 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0919 17:11:54.596244   28964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0919 17:11:54.596295   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:11:54.612572   28964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:11:54.612634   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:11:54.624249   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:11:54.635007   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
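	The four sed edits above patch the CRI-O drop-in in place rather than writing a fresh file. As a rough illustrative check (not part of the captured run), the drop-in can be inspected afterwards; the expected values below are taken from the `crio config` dump later in this log, so this is a sketch of the intended end state, not verbatim host output:

		# illustrative only: confirm the drop-in now carries the intended values
		grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		# expected, per the `crio config` output further down:
		#   pause_image = "registry.k8s.io/pause:3.9"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"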
	I0919 17:11:54.645838   28964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:11:54.656275   28964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:11:54.665482   28964 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0919 17:11:54.665633   28964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:11:54.675209   28964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:11:54.809731   28964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:11:56.699782   28964 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.890016404s)
	I0919 17:11:56.699811   28964 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:11:56.699854   28964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:11:56.705048   28964 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0919 17:11:56.705073   28964 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0919 17:11:56.705084   28964 command_runner.go:130] > Device: 16h/22d	Inode: 1223        Links: 1
	I0919 17:11:56.705095   28964 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 17:11:56.705103   28964 command_runner.go:130] > Access: 2023-09-19 17:11:56.595344495 +0000
	I0919 17:11:56.705112   28964 command_runner.go:130] > Modify: 2023-09-19 17:11:56.595344495 +0000
	I0919 17:11:56.705120   28964 command_runner.go:130] > Change: 2023-09-19 17:11:56.595344495 +0000
	I0919 17:11:56.705127   28964 command_runner.go:130] >  Birth: -
	I0919 17:11:56.705174   28964 start.go:537] Will wait 60s for crictl version
	I0919 17:11:56.705227   28964 ssh_runner.go:195] Run: which crictl
	I0919 17:11:56.709228   28964 command_runner.go:130] > /usr/bin/crictl
	I0919 17:11:56.709290   28964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:11:56.747130   28964 command_runner.go:130] > Version:  0.1.0
	I0919 17:11:56.747156   28964 command_runner.go:130] > RuntimeName:  cri-o
	I0919 17:11:56.747164   28964 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0919 17:11:56.747174   28964 command_runner.go:130] > RuntimeApiVersion:  v1
	I0919 17:11:56.748188   28964 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:11:56.748268   28964 ssh_runner.go:195] Run: crio --version
	I0919 17:11:56.800217   28964 command_runner.go:130] > crio version 1.24.1
	I0919 17:11:56.800237   28964 command_runner.go:130] > Version:          1.24.1
	I0919 17:11:56.800243   28964 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 17:11:56.800248   28964 command_runner.go:130] > GitTreeState:     dirty
	I0919 17:11:56.800258   28964 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 17:11:56.800265   28964 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 17:11:56.800271   28964 command_runner.go:130] > Compiler:         gc
	I0919 17:11:56.800278   28964 command_runner.go:130] > Platform:         linux/amd64
	I0919 17:11:56.800286   28964 command_runner.go:130] > Linkmode:         dynamic
	I0919 17:11:56.800298   28964 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 17:11:56.800305   28964 command_runner.go:130] > SeccompEnabled:   true
	I0919 17:11:56.800312   28964 command_runner.go:130] > AppArmorEnabled:  false
	I0919 17:11:56.800430   28964 ssh_runner.go:195] Run: crio --version
	I0919 17:11:56.851268   28964 command_runner.go:130] > crio version 1.24.1
	I0919 17:11:56.851349   28964 command_runner.go:130] > Version:          1.24.1
	I0919 17:11:56.851362   28964 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 17:11:56.851369   28964 command_runner.go:130] > GitTreeState:     dirty
	I0919 17:11:56.851376   28964 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 17:11:56.851383   28964 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 17:11:56.851389   28964 command_runner.go:130] > Compiler:         gc
	I0919 17:11:56.851395   28964 command_runner.go:130] > Platform:         linux/amd64
	I0919 17:11:56.851408   28964 command_runner.go:130] > Linkmode:         dynamic
	I0919 17:11:56.851444   28964 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 17:11:56.851451   28964 command_runner.go:130] > SeccompEnabled:   true
	I0919 17:11:56.851462   28964 command_runner.go:130] > AppArmorEnabled:  false
	I0919 17:11:56.853700   28964 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I0919 17:11:56.855319   28964 out.go:177]   - env NO_PROXY=192.168.39.38
	I0919 17:11:56.856746   28964 main.go:141] libmachine: (multinode-553715-m02) Calling .GetIP
	I0919 17:11:56.859535   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:56.859930   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:11:56.859976   28964 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:11:56.860147   28964 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 17:11:56.864732   28964 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0919 17:11:56.864778   28964 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715 for IP: 192.168.39.11
	I0919 17:11:56.864802   28964 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:11:56.864964   28964 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:11:56.865021   28964 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:11:56.865101   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 17:11:56.865151   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 17:11:56.865174   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 17:11:56.865192   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 17:11:56.865266   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:11:56.865311   28964 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:11:56.865326   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:11:56.865362   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:11:56.865395   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:11:56.865430   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:11:56.865482   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:11:56.865522   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:11:56.865541   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem -> /usr/share/ca-certificates/13239.pem
	I0919 17:11:56.865560   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /usr/share/ca-certificates/132392.pem
	I0919 17:11:56.865983   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:11:56.891124   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:11:56.913544   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:11:56.939665   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:11:56.966122   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:11:56.991271   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:11:57.016033   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:11:57.040673   28964 ssh_runner.go:195] Run: openssl version
	I0919 17:11:57.046759   28964 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0919 17:11:57.046832   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:11:57.059579   28964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:11:57.064735   28964 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:11:57.064927   28964 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:11:57.064979   28964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:11:57.070610   28964 command_runner.go:130] > 51391683
	I0919 17:11:57.070964   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
	I0919 17:11:57.081556   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:11:57.094537   28964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:11:57.099397   28964 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:11:57.099591   28964 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:11:57.099642   28964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:11:57.105313   28964 command_runner.go:130] > 3ec20f2e
	I0919 17:11:57.105589   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:11:57.116488   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:11:57.127476   28964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:11:57.132192   28964 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:11:57.132384   28964 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:11:57.132462   28964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:11:57.138776   28964 command_runner.go:130] > b5213941
	I0919 17:11:57.138846   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
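	The three symlink steps above (for 13239.pem, 132392.pem and minikubeCA.pem) all follow the same pattern: hash the certificate with openssl, then link the PEM into /etc/ssl/certs under that hash so OpenSSL's lookup-by-hash can find it. A minimal sketch of the same step for an arbitrary certificate follows; the path used here is a hypothetical example, not one taken from this run:

		# compute the OpenSSL subject hash and install the cert where lookup-by-hash expects it
		CERT=/usr/share/ca-certificates/example.pem   # hypothetical path, for illustration only
		HASH=$(openssl x509 -hash -noout -in "$CERT")
		sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"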
	I0919 17:11:57.149306   28964 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:11:57.153707   28964 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 17:11:57.153740   28964 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 17:11:57.153836   28964 ssh_runner.go:195] Run: crio config
	I0919 17:11:57.213897   28964 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0919 17:11:57.213923   28964 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0919 17:11:57.213933   28964 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0919 17:11:57.213939   28964 command_runner.go:130] > #
	I0919 17:11:57.213949   28964 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0919 17:11:57.213959   28964 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0919 17:11:57.213970   28964 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0919 17:11:57.213982   28964 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0919 17:11:57.213989   28964 command_runner.go:130] > # reload'.
	I0919 17:11:57.214000   28964 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0919 17:11:57.214011   28964 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0919 17:11:57.214025   28964 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0919 17:11:57.214037   28964 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0919 17:11:57.214043   28964 command_runner.go:130] > [crio]
	I0919 17:11:57.214052   28964 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0919 17:11:57.214058   28964 command_runner.go:130] > # containers images, in this directory.
	I0919 17:11:57.214351   28964 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0919 17:11:57.214371   28964 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0919 17:11:57.214379   28964 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0919 17:11:57.214390   28964 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0919 17:11:57.214401   28964 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0919 17:11:57.214412   28964 command_runner.go:130] > storage_driver = "overlay"
	I0919 17:11:57.214421   28964 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0919 17:11:57.214427   28964 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0919 17:11:57.214434   28964 command_runner.go:130] > storage_option = [
	I0919 17:11:57.214571   28964 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0919 17:11:57.214679   28964 command_runner.go:130] > ]
	I0919 17:11:57.214695   28964 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0919 17:11:57.214708   28964 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0919 17:11:57.215979   28964 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0919 17:11:57.215998   28964 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0919 17:11:57.216008   28964 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0919 17:11:57.216016   28964 command_runner.go:130] > # always happen on a node reboot
	I0919 17:11:57.216024   28964 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0919 17:11:57.216032   28964 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0919 17:11:57.216044   28964 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0919 17:11:57.216067   28964 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0919 17:11:57.216077   28964 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0919 17:11:57.216089   28964 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0919 17:11:57.216102   28964 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0919 17:11:57.216114   28964 command_runner.go:130] > # internal_wipe = true
	I0919 17:11:57.216127   28964 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0919 17:11:57.216135   28964 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0919 17:11:57.216142   28964 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0919 17:11:57.216148   28964 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0919 17:11:57.216161   28964 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0919 17:11:57.216167   28964 command_runner.go:130] > [crio.api]
	I0919 17:11:57.216175   28964 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0919 17:11:57.216182   28964 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0919 17:11:57.216192   28964 command_runner.go:130] > # IP address on which the stream server will listen.
	I0919 17:11:57.216202   28964 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0919 17:11:57.216214   28964 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0919 17:11:57.216226   28964 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0919 17:11:57.216237   28964 command_runner.go:130] > # stream_port = "0"
	I0919 17:11:57.216245   28964 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0919 17:11:57.216255   28964 command_runner.go:130] > # stream_enable_tls = false
	I0919 17:11:57.216265   28964 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0919 17:11:57.216273   28964 command_runner.go:130] > # stream_idle_timeout = ""
	I0919 17:11:57.216288   28964 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0919 17:11:57.216299   28964 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0919 17:11:57.216306   28964 command_runner.go:130] > # minutes.
	I0919 17:11:57.216317   28964 command_runner.go:130] > # stream_tls_cert = ""
	I0919 17:11:57.216327   28964 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0919 17:11:57.216339   28964 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0919 17:11:57.216350   28964 command_runner.go:130] > # stream_tls_key = ""
	I0919 17:11:57.216363   28964 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0919 17:11:57.216375   28964 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0919 17:11:57.216389   28964 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0919 17:11:57.216400   28964 command_runner.go:130] > # stream_tls_ca = ""
	I0919 17:11:57.216423   28964 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 17:11:57.216432   28964 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0919 17:11:57.216447   28964 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 17:11:57.216460   28964 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0919 17:11:57.216501   28964 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0919 17:11:57.216514   28964 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0919 17:11:57.216521   28964 command_runner.go:130] > [crio.runtime]
	I0919 17:11:57.216534   28964 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0919 17:11:57.216547   28964 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0919 17:11:57.216557   28964 command_runner.go:130] > # "nofile=1024:2048"
	I0919 17:11:57.216571   28964 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0919 17:11:57.216581   28964 command_runner.go:130] > # default_ulimits = [
	I0919 17:11:57.216591   28964 command_runner.go:130] > # ]
	I0919 17:11:57.216600   28964 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0919 17:11:57.216608   28964 command_runner.go:130] > # no_pivot = false
	I0919 17:11:57.216616   28964 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0919 17:11:57.216634   28964 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0919 17:11:57.216646   28964 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0919 17:11:57.216659   28964 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0919 17:11:57.216670   28964 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0919 17:11:57.216683   28964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 17:11:57.216694   28964 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0919 17:11:57.216703   28964 command_runner.go:130] > # Cgroup setting for conmon
	I0919 17:11:57.216718   28964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0919 17:11:57.216730   28964 command_runner.go:130] > conmon_cgroup = "pod"
	I0919 17:11:57.216740   28964 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0919 17:11:57.216752   28964 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0919 17:11:57.216766   28964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 17:11:57.216774   28964 command_runner.go:130] > conmon_env = [
	I0919 17:11:57.216780   28964 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0919 17:11:57.216789   28964 command_runner.go:130] > ]
	I0919 17:11:57.216798   28964 command_runner.go:130] > # Additional environment variables to set for all the
	I0919 17:11:57.216811   28964 command_runner.go:130] > # containers. These are overridden if set in the
	I0919 17:11:57.216824   28964 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0919 17:11:57.216834   28964 command_runner.go:130] > # default_env = [
	I0919 17:11:57.216840   28964 command_runner.go:130] > # ]
	I0919 17:11:57.216850   28964 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0919 17:11:57.216859   28964 command_runner.go:130] > # selinux = false
	I0919 17:11:57.216866   28964 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0919 17:11:57.216880   28964 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0919 17:11:57.216895   28964 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0919 17:11:57.216905   28964 command_runner.go:130] > # seccomp_profile = ""
	I0919 17:11:57.216918   28964 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0919 17:11:57.216930   28964 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0919 17:11:57.216943   28964 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0919 17:11:57.216955   28964 command_runner.go:130] > # which might increase security.
	I0919 17:11:57.216966   28964 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0919 17:11:57.216981   28964 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0919 17:11:57.216994   28964 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0919 17:11:57.217007   28964 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0919 17:11:57.217021   28964 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0919 17:11:57.217033   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:11:57.217041   28964 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0919 17:11:57.217058   28964 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0919 17:11:57.217066   28964 command_runner.go:130] > # the cgroup blockio controller.
	I0919 17:11:57.217077   28964 command_runner.go:130] > # blockio_config_file = ""
	I0919 17:11:57.217088   28964 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0919 17:11:57.217099   28964 command_runner.go:130] > # irqbalance daemon.
	I0919 17:11:57.217112   28964 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0919 17:11:57.217123   28964 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0919 17:11:57.217135   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:11:57.217146   28964 command_runner.go:130] > # rdt_config_file = ""
	I0919 17:11:57.217155   28964 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0919 17:11:57.217166   28964 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0919 17:11:57.217180   28964 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0919 17:11:57.217190   28964 command_runner.go:130] > # separate_pull_cgroup = ""
	I0919 17:11:57.217201   28964 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0919 17:11:57.217215   28964 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0919 17:11:57.217223   28964 command_runner.go:130] > # will be added.
	I0919 17:11:57.217233   28964 command_runner.go:130] > # default_capabilities = [
	I0919 17:11:57.217245   28964 command_runner.go:130] > # 	"CHOWN",
	I0919 17:11:57.217252   28964 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0919 17:11:57.217262   28964 command_runner.go:130] > # 	"FSETID",
	I0919 17:11:57.217269   28964 command_runner.go:130] > # 	"FOWNER",
	I0919 17:11:57.217280   28964 command_runner.go:130] > # 	"SETGID",
	I0919 17:11:57.217291   28964 command_runner.go:130] > # 	"SETUID",
	I0919 17:11:57.217298   28964 command_runner.go:130] > # 	"SETPCAP",
	I0919 17:11:57.217309   28964 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0919 17:11:57.217320   28964 command_runner.go:130] > # 	"KILL",
	I0919 17:11:57.217326   28964 command_runner.go:130] > # ]
	I0919 17:11:57.217341   28964 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0919 17:11:57.217355   28964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 17:11:57.217365   28964 command_runner.go:130] > # default_sysctls = [
	I0919 17:11:57.217373   28964 command_runner.go:130] > # ]
	I0919 17:11:57.217382   28964 command_runner.go:130] > # List of devices on the host that a
	I0919 17:11:57.217398   28964 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0919 17:11:57.217409   28964 command_runner.go:130] > # allowed_devices = [
	I0919 17:11:57.217417   28964 command_runner.go:130] > # 	"/dev/fuse",
	I0919 17:11:57.217426   28964 command_runner.go:130] > # ]
	I0919 17:11:57.217435   28964 command_runner.go:130] > # List of additional devices. specified as
	I0919 17:11:57.217451   28964 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0919 17:11:57.217463   28964 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0919 17:11:57.217521   28964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 17:11:57.217533   28964 command_runner.go:130] > # additional_devices = [
	I0919 17:11:57.217539   28964 command_runner.go:130] > # ]
	I0919 17:11:57.217548   28964 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0919 17:11:57.217558   28964 command_runner.go:130] > # cdi_spec_dirs = [
	I0919 17:11:57.217564   28964 command_runner.go:130] > # 	"/etc/cdi",
	I0919 17:11:57.217575   28964 command_runner.go:130] > # 	"/var/run/cdi",
	I0919 17:11:57.217585   28964 command_runner.go:130] > # ]
	I0919 17:11:57.217595   28964 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0919 17:11:57.217609   28964 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0919 17:11:57.217619   28964 command_runner.go:130] > # Defaults to false.
	I0919 17:11:57.217630   28964 command_runner.go:130] > # device_ownership_from_security_context = false
	I0919 17:11:57.217640   28964 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0919 17:11:57.217655   28964 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0919 17:11:57.217662   28964 command_runner.go:130] > # hooks_dir = [
	I0919 17:11:57.217684   28964 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0919 17:11:57.217709   28964 command_runner.go:130] > # ]
	I0919 17:11:57.217724   28964 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0919 17:11:57.217735   28964 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0919 17:11:57.217748   28964 command_runner.go:130] > # its default mounts from the following two files:
	I0919 17:11:57.217757   28964 command_runner.go:130] > #
	I0919 17:11:57.217768   28964 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0919 17:11:57.217782   28964 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0919 17:11:57.217795   28964 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0919 17:11:57.217804   28964 command_runner.go:130] > #
	I0919 17:11:57.217815   28964 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0919 17:11:57.217828   28964 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0919 17:11:57.217842   28964 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0919 17:11:57.217852   28964 command_runner.go:130] > #      only add mounts it finds in this file.
	I0919 17:11:57.217860   28964 command_runner.go:130] > #
	I0919 17:11:57.218149   28964 command_runner.go:130] > # default_mounts_file = ""
	I0919 17:11:57.218165   28964 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0919 17:11:57.218177   28964 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0919 17:11:57.218184   28964 command_runner.go:130] > pids_limit = 1024
	I0919 17:11:57.218195   28964 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0919 17:11:57.218214   28964 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0919 17:11:57.218225   28964 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0919 17:11:57.218242   28964 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0919 17:11:57.218252   28964 command_runner.go:130] > # log_size_max = -1
	I0919 17:11:57.218264   28964 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0919 17:11:57.218274   28964 command_runner.go:130] > # log_to_journald = false
	I0919 17:11:57.218284   28964 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0919 17:11:57.218311   28964 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0919 17:11:57.218323   28964 command_runner.go:130] > # Path to directory for container attach sockets.
	I0919 17:11:57.218331   28964 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0919 17:11:57.218345   28964 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0919 17:11:57.218356   28964 command_runner.go:130] > # bind_mount_prefix = ""
	I0919 17:11:57.218369   28964 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0919 17:11:57.218379   28964 command_runner.go:130] > # read_only = false
	I0919 17:11:57.218389   28964 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0919 17:11:57.218401   28964 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0919 17:11:57.218412   28964 command_runner.go:130] > # live configuration reload.
	I0919 17:11:57.218421   28964 command_runner.go:130] > # log_level = "info"
	I0919 17:11:57.218434   28964 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0919 17:11:57.218445   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:11:57.218453   28964 command_runner.go:130] > # log_filter = ""
	I0919 17:11:57.218468   28964 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0919 17:11:57.218483   28964 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0919 17:11:57.218494   28964 command_runner.go:130] > # separated by comma.
	I0919 17:11:57.218504   28964 command_runner.go:130] > # uid_mappings = ""
	I0919 17:11:57.218514   28964 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0919 17:11:57.218527   28964 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0919 17:11:57.218534   28964 command_runner.go:130] > # separated by comma.
	I0919 17:11:57.218543   28964 command_runner.go:130] > # gid_mappings = ""
	I0919 17:11:57.218552   28964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0919 17:11:57.218565   28964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 17:11:57.218579   28964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 17:11:57.218605   28964 command_runner.go:130] > # minimum_mappable_uid = -1
	I0919 17:11:57.218622   28964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0919 17:11:57.218632   28964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 17:11:57.218641   28964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 17:11:57.218653   28964 command_runner.go:130] > # minimum_mappable_gid = -1
	I0919 17:11:57.218663   28964 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0919 17:11:57.218673   28964 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0919 17:11:57.218687   28964 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0919 17:11:57.218699   28964 command_runner.go:130] > # ctr_stop_timeout = 30
	I0919 17:11:57.218711   28964 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0919 17:11:57.218724   28964 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0919 17:11:57.218736   28964 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0919 17:11:57.218746   28964 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0919 17:11:57.218753   28964 command_runner.go:130] > drop_infra_ctr = false
	I0919 17:11:57.218767   28964 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0919 17:11:57.218780   28964 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0919 17:11:57.218795   28964 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0919 17:11:57.218805   28964 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0919 17:11:57.218814   28964 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0919 17:11:57.218823   28964 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0919 17:11:57.218831   28964 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0919 17:11:57.218847   28964 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0919 17:11:57.218873   28964 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0919 17:11:57.218887   28964 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0919 17:11:57.218902   28964 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0919 17:11:57.218913   28964 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0919 17:11:57.218923   28964 command_runner.go:130] > # default_runtime = "runc"
	I0919 17:11:57.218932   28964 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0919 17:11:57.218947   28964 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0919 17:11:57.218966   28964 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0919 17:11:57.218977   28964 command_runner.go:130] > # creation as a file is not desired either.
	I0919 17:11:57.218992   28964 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0919 17:11:57.219005   28964 command_runner.go:130] > # the hostname is being managed dynamically.
	I0919 17:11:57.219013   28964 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0919 17:11:57.219023   28964 command_runner.go:130] > # ]
	I0919 17:11:57.219033   28964 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0919 17:11:57.219054   28964 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0919 17:11:57.219070   28964 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0919 17:11:57.219081   28964 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0919 17:11:57.219090   28964 command_runner.go:130] > #
	I0919 17:11:57.219100   28964 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0919 17:11:57.219112   28964 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0919 17:11:57.219124   28964 command_runner.go:130] > #  runtime_type = "oci"
	I0919 17:11:57.219132   28964 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0919 17:11:57.219141   28964 command_runner.go:130] > #  privileged_without_host_devices = false
	I0919 17:11:57.219153   28964 command_runner.go:130] > #  allowed_annotations = []
	I0919 17:11:57.219160   28964 command_runner.go:130] > # Where:
	I0919 17:11:57.219171   28964 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0919 17:11:57.219186   28964 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0919 17:11:57.219201   28964 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0919 17:11:57.219215   28964 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0919 17:11:57.219225   28964 command_runner.go:130] > #   in $PATH.
	I0919 17:11:57.219236   28964 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0919 17:11:57.219248   28964 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0919 17:11:57.219263   28964 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0919 17:11:57.219271   28964 command_runner.go:130] > #   state.
	I0919 17:11:57.219282   28964 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0919 17:11:57.219296   28964 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0919 17:11:57.219310   28964 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0919 17:11:57.219323   28964 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0919 17:11:57.219336   28964 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0919 17:11:57.219351   28964 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0919 17:11:57.219363   28964 command_runner.go:130] > #   The currently recognized values are:
	I0919 17:11:57.219378   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0919 17:11:57.219395   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0919 17:11:57.219408   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0919 17:11:57.219422   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0919 17:11:57.219438   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0919 17:11:57.219454   28964 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0919 17:11:57.219468   28964 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0919 17:11:57.219483   28964 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0919 17:11:57.219495   28964 command_runner.go:130] > #   should be moved to the container's cgroup
	I0919 17:11:57.219527   28964 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0919 17:11:57.219539   28964 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0919 17:11:57.219546   28964 command_runner.go:130] > runtime_type = "oci"
	I0919 17:11:57.219554   28964 command_runner.go:130] > runtime_root = "/run/runc"
	I0919 17:11:57.219564   28964 command_runner.go:130] > runtime_config_path = ""
	I0919 17:11:57.219574   28964 command_runner.go:130] > monitor_path = ""
	I0919 17:11:57.219581   28964 command_runner.go:130] > monitor_cgroup = ""
	I0919 17:11:57.219590   28964 command_runner.go:130] > monitor_exec_cgroup = ""
	I0919 17:11:57.219600   28964 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0919 17:11:57.219605   28964 command_runner.go:130] > # running containers
	I0919 17:11:57.219609   28964 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0919 17:11:57.219618   28964 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0919 17:11:57.219642   28964 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0919 17:11:57.219651   28964 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0919 17:11:57.219656   28964 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0919 17:11:57.219660   28964 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0919 17:11:57.219669   28964 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0919 17:11:57.219674   28964 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0919 17:11:57.219681   28964 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0919 17:11:57.219686   28964 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0919 17:11:57.219692   28964 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0919 17:11:57.219699   28964 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0919 17:11:57.219705   28964 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0919 17:11:57.219713   28964 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0919 17:11:57.219721   28964 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0919 17:11:57.219728   28964 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0919 17:11:57.219736   28964 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0919 17:11:57.219746   28964 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0919 17:11:57.219752   28964 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0919 17:11:57.219761   28964 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0919 17:11:57.219765   28964 command_runner.go:130] > # Example:
	I0919 17:11:57.219771   28964 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0919 17:11:57.219776   28964 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0919 17:11:57.219783   28964 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0919 17:11:57.219788   28964 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0919 17:11:57.219793   28964 command_runner.go:130] > # cpuset = 0
	I0919 17:11:57.219798   28964 command_runner.go:130] > # cpushares = "0-1"
	I0919 17:11:57.219804   28964 command_runner.go:130] > # Where:
	I0919 17:11:57.219809   28964 command_runner.go:130] > # The workload name is workload-type.
	I0919 17:11:57.219816   28964 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0919 17:11:57.219824   28964 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0919 17:11:57.219830   28964 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0919 17:11:57.219837   28964 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0919 17:11:57.219845   28964 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0919 17:11:57.219849   28964 command_runner.go:130] > # 
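	For reference, a pod opts into the example workload above by carrying the activation annotation; a minimal sketch, assuming the commented-out [crio.runtime.workloads.workload-type] section were actually enabled (the pod and container names here are made up):

	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: workload-demo
	      annotations:
	        # activation annotation: key only, value ignored
	        io.crio/workload: ""
	        # per-container override, following the example form in the comments above
	        io.crio.workload-type/app: '{"cpushares": "512"}'
	    spec:
	      containers:
	      - name: app
	        image: registry.k8s.io/pause:3.9
	    EOF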
	I0919 17:11:57.219857   28964 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0919 17:11:57.219861   28964 command_runner.go:130] > #
	I0919 17:11:57.219868   28964 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0919 17:11:57.219875   28964 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0919 17:11:57.219884   28964 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0919 17:11:57.219890   28964 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0919 17:11:57.219898   28964 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0919 17:11:57.219905   28964 command_runner.go:130] > [crio.image]
	I0919 17:11:57.219911   28964 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0919 17:11:57.219918   28964 command_runner.go:130] > # default_transport = "docker://"
	I0919 17:11:57.219924   28964 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0919 17:11:57.219932   28964 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0919 17:11:57.219939   28964 command_runner.go:130] > # global_auth_file = ""
	I0919 17:11:57.219944   28964 command_runner.go:130] > # The image used to instantiate infra containers.
	I0919 17:11:57.219951   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:11:57.219956   28964 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0919 17:11:57.219965   28964 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0919 17:11:57.219974   28964 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0919 17:11:57.219982   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:11:57.219989   28964 command_runner.go:130] > # pause_image_auth_file = ""
	I0919 17:11:57.219994   28964 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0919 17:11:57.220002   28964 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0919 17:11:57.220009   28964 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0919 17:11:57.220017   28964 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0919 17:11:57.220022   28964 command_runner.go:130] > # pause_command = "/pause"
	I0919 17:11:57.220031   28964 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0919 17:11:57.220040   28964 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0919 17:11:57.220072   28964 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0919 17:11:57.220085   28964 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0919 17:11:57.220094   28964 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0919 17:11:57.220103   28964 command_runner.go:130] > # signature_policy = ""
	I0919 17:11:57.220109   28964 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0919 17:11:57.220118   28964 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0919 17:11:57.220124   28964 command_runner.go:130] > # changing them here.
	I0919 17:11:57.220129   28964 command_runner.go:130] > # insecure_registries = [
	I0919 17:11:57.220135   28964 command_runner.go:130] > # ]
	I0919 17:11:57.220141   28964 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0919 17:11:57.220149   28964 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0919 17:11:57.220154   28964 command_runner.go:130] > # image_volumes = "mkdir"
	I0919 17:11:57.220159   28964 command_runner.go:130] > # Temporary directory to use for storing big files
	I0919 17:11:57.220164   28964 command_runner.go:130] > # big_files_temporary_dir = ""
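	As the comments above note, registries are best configured system-wide in /etc/containers/registries.conf; if a CRI-O-only override is really wanted, one option is a drop-in under /etc/crio/crio.conf.d (a sketch with a hypothetical registry name):

	    sudo tee /etc/crio/crio.conf.d/10-registries.conf >/dev/null <<'EOF'
	    [crio.image]
	    insecure_registries = [
	      "registry.local:5000",
	    ]
	    EOF
	    sudo systemctl restart crio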
	I0919 17:11:57.220170   28964 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0919 17:11:57.220174   28964 command_runner.go:130] > # CNI plugins.
	I0919 17:11:57.220177   28964 command_runner.go:130] > [crio.network]
	I0919 17:11:57.220183   28964 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0919 17:11:57.220191   28964 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0919 17:11:57.220196   28964 command_runner.go:130] > # cni_default_network = ""
	I0919 17:11:57.220204   28964 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0919 17:11:57.220208   28964 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0919 17:11:57.220215   28964 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0919 17:11:57.220219   28964 command_runner.go:130] > # plugin_dirs = [
	I0919 17:11:57.220226   28964 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0919 17:11:57.220235   28964 command_runner.go:130] > # ]
	I0919 17:11:57.220243   28964 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0919 17:11:57.220253   28964 command_runner.go:130] > [crio.metrics]
	I0919 17:11:57.220262   28964 command_runner.go:130] > # Globally enable or disable metrics support.
	I0919 17:11:57.220272   28964 command_runner.go:130] > enable_metrics = true
	I0919 17:11:57.220281   28964 command_runner.go:130] > # Specify enabled metrics collectors.
	I0919 17:11:57.220286   28964 command_runner.go:130] > # Per default all metrics are enabled.
	I0919 17:11:57.220295   28964 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0919 17:11:57.220301   28964 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0919 17:11:57.220308   28964 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0919 17:11:57.220313   28964 command_runner.go:130] > # metrics_collectors = [
	I0919 17:11:57.220317   28964 command_runner.go:130] > # 	"operations",
	I0919 17:11:57.220322   28964 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0919 17:11:57.220327   28964 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0919 17:11:57.220331   28964 command_runner.go:130] > # 	"operations_errors",
	I0919 17:11:57.220338   28964 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0919 17:11:57.220342   28964 command_runner.go:130] > # 	"image_pulls_by_name",
	I0919 17:11:57.220347   28964 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0919 17:11:57.220353   28964 command_runner.go:130] > # 	"image_pulls_failures",
	I0919 17:11:57.220358   28964 command_runner.go:130] > # 	"image_pulls_successes",
	I0919 17:11:57.220365   28964 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0919 17:11:57.220369   28964 command_runner.go:130] > # 	"image_layer_reuse",
	I0919 17:11:57.220377   28964 command_runner.go:130] > # 	"containers_oom_total",
	I0919 17:11:57.220383   28964 command_runner.go:130] > # 	"containers_oom",
	I0919 17:11:57.220388   28964 command_runner.go:130] > # 	"processes_defunct",
	I0919 17:11:57.220394   28964 command_runner.go:130] > # 	"operations_total",
	I0919 17:11:57.220398   28964 command_runner.go:130] > # 	"operations_latency_seconds",
	I0919 17:11:57.220421   28964 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0919 17:11:57.220431   28964 command_runner.go:130] > # 	"operations_errors_total",
	I0919 17:11:57.220442   28964 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0919 17:11:57.220452   28964 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0919 17:11:57.220460   28964 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0919 17:11:57.220464   28964 command_runner.go:130] > # 	"image_pulls_success_total",
	I0919 17:11:57.220471   28964 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0919 17:11:57.220476   28964 command_runner.go:130] > # 	"containers_oom_count_total",
	I0919 17:11:57.220483   28964 command_runner.go:130] > # ]
	I0919 17:11:57.220488   28964 command_runner.go:130] > # The port on which the metrics server will listen.
	I0919 17:11:57.220495   28964 command_runner.go:130] > # metrics_port = 9090
	I0919 17:11:57.220501   28964 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0919 17:11:57.220507   28964 command_runner.go:130] > # metrics_socket = ""
	I0919 17:11:57.220512   28964 command_runner.go:130] > # The certificate for the secure metrics server.
	I0919 17:11:57.220519   28964 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0919 17:11:57.220527   28964 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0919 17:11:57.220535   28964 command_runner.go:130] > # certificate on any modification event.
	I0919 17:11:57.220539   28964 command_runner.go:130] > # metrics_cert = ""
	I0919 17:11:57.220547   28964 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0919 17:11:57.220555   28964 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0919 17:11:57.220560   28964 command_runner.go:130] > # metrics_key = ""
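	With enable_metrics = true and the default metrics_port of 9090 shown above, the Prometheus endpoint can be spot-checked from the node itself (metrics are served on localhost by default, and names are typically prefixed crio_ per the collector notes above):

	    curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head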
	I0919 17:11:57.220568   28964 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0919 17:11:57.220574   28964 command_runner.go:130] > [crio.tracing]
	I0919 17:11:57.220580   28964 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0919 17:11:57.220586   28964 command_runner.go:130] > # enable_tracing = false
	I0919 17:11:57.220592   28964 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0919 17:11:57.220599   28964 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0919 17:11:57.220604   28964 command_runner.go:130] > # Number of samples to collect per million spans.
	I0919 17:11:57.220611   28964 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0919 17:11:57.220617   28964 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0919 17:11:57.220624   28964 command_runner.go:130] > [crio.stats]
	I0919 17:11:57.220630   28964 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0919 17:11:57.220640   28964 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0919 17:11:57.220646   28964 command_runner.go:130] > # stats_collection_period = 0
	I0919 17:11:57.221256   28964 command_runner.go:130] ! time="2023-09-19 17:11:57.202015252Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0919 17:11:57.221279   28964 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0919 17:11:57.221335   28964 cni.go:84] Creating CNI manager for ""
	I0919 17:11:57.221348   28964 cni.go:136] 3 nodes found, recommending kindnet
	I0919 17:11:57.221359   28964 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:11:57.221387   28964 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553715 NodeName:multinode-553715-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 17:11:57.221514   28964 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553715-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:11:57.221561   28964 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553715-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
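	The rendered kubeadm and kubelet configuration can be inspected after the fact from the cluster (the kubectl context name is assumed to match the profile above):

	    kubectl --context multinode-553715 -n kube-system get cm kubeadm-config -o yaml
	    kubectl --context multinode-553715 -n kube-system get cm kubelet-config -o yaml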
	I0919 17:11:57.221609   28964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 17:11:57.231602   28964 command_runner.go:130] > kubeadm
	I0919 17:11:57.231625   28964 command_runner.go:130] > kubectl
	I0919 17:11:57.231632   28964 command_runner.go:130] > kubelet
	I0919 17:11:57.231654   28964 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:11:57.231710   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0919 17:11:57.242099   28964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0919 17:11:57.259402   28964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:11:57.277136   28964 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0919 17:11:57.281380   28964 command_runner.go:130] > 192.168.39.38	control-plane.minikube.internal
	I0919 17:11:57.281444   28964 host.go:66] Checking if "multinode-553715" exists ...
	I0919 17:11:57.281735   28964 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:11:57.281891   28964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:11:57.281932   28964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:11:57.297105   28964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I0919 17:11:57.297599   28964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:11:57.298089   28964 main.go:141] libmachine: Using API Version  1
	I0919 17:11:57.298117   28964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:11:57.298456   28964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:11:57.298653   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:11:57.298813   28964 start.go:304] JoinCluster: &{Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPat
h: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:11:57.298917   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 17:11:57.298933   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:11:57.301879   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:11:57.302328   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:11:57.302363   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:11:57.302491   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:11:57.302667   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:11:57.302841   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:11:57.303016   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:11:57.485437   28964 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zlhvw3.ywb644mvzpdkulos --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:11:57.485989   28964 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0919 17:11:57.486029   28964 host.go:66] Checking if "multinode-553715" exists ...
	I0919 17:11:57.486420   28964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:11:57.486459   28964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:11:57.500910   28964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0919 17:11:57.501353   28964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:11:57.501822   28964 main.go:141] libmachine: Using API Version  1
	I0919 17:11:57.501857   28964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:11:57.502274   28964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:11:57.502516   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:11:57.502742   28964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-553715-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0919 17:11:57.502763   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:11:57.506263   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:11:57.506805   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:11:57.506840   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:11:57.507019   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:11:57.507240   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:11:57.507397   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:11:57.507572   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:11:57.683582   28964 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0919 17:11:57.747652   28964 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-ccllv, kube-system/kube-proxy-d5vl8
	I0919 17:12:00.773556   28964 command_runner.go:130] > node/multinode-553715-m02 cordoned
	I0919 17:12:00.773581   28964 command_runner.go:130] > pod "busybox-5bc68d56bd-m9sw8" has DeletionTimestamp older than 1 seconds, skipping
	I0919 17:12:00.773587   28964 command_runner.go:130] > node/multinode-553715-m02 drained
	I0919 17:12:00.773606   28964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-553715-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.27084381s)
	I0919 17:12:00.773621   28964 node.go:108] successfully drained node "m02"
	I0919 17:12:00.773961   28964 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:12:00.774149   28964 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:12:00.774463   28964 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0919 17:12:00.774506   28964 round_trippers.go:463] DELETE https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:12:00.774514   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:00.774521   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:00.774526   28964 round_trippers.go:473]     Content-Type: application/json
	I0919 17:12:00.774532   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:00.788296   28964 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0919 17:12:00.788315   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:00.788324   28964 round_trippers.go:580]     Audit-Id: 57e158e1-cbcf-405d-b268-c4ccae7b49c0
	I0919 17:12:00.788333   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:00.788340   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:00.788348   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:00.788357   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:00.788366   28964 round_trippers.go:580]     Content-Length: 171
	I0919 17:12:00.788373   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:00 GMT
	I0919 17:12:00.788424   28964 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-553715-m02","kind":"nodes","uid":"f4ff193f-acf0-4278-be8c-5827dddd3ce8"}}
	I0919 17:12:00.788464   28964 node.go:124] successfully deleted node "m02"
	I0919 17:12:00.788479   28964 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0919 17:12:00.788506   28964 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0919 17:12:00.788530   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zlhvw3.ywb644mvzpdkulos --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553715-m02"
	I0919 17:12:00.839243   28964 command_runner.go:130] ! W0919 17:12:00.829764    2680 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0919 17:12:00.839274   28964 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0919 17:12:00.972403   28964 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0919 17:12:00.972475   28964 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0919 17:12:01.733701   28964 command_runner.go:130] > [preflight] Running pre-flight checks
	I0919 17:12:01.733730   28964 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0919 17:12:01.733743   28964 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0919 17:12:01.733757   28964 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:12:01.733767   28964 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:12:01.733775   28964 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0919 17:12:01.733785   28964 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0919 17:12:01.733793   28964 command_runner.go:130] > This node has joined the cluster:
	I0919 17:12:01.733809   28964 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0919 17:12:01.733818   28964 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0919 17:12:01.733828   28964 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0919 17:12:01.733849   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 17:12:01.998621   28964 start.go:306] JoinCluster complete in 4.699810013s
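	Roughly the same drain / delete / rejoin sequence, expressed as standalone commands (the token and hash placeholders stand in for the values printed above):

	    kubectl --context multinode-553715 drain multinode-553715-m02 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data
	    kubectl --context multinode-553715 delete node multinode-553715-m02
	    # on the control-plane node: mint a fresh join command
	    sudo kubeadm token create --print-join-command --ttl=0
	    # on the worker: rejoin using the printed token and CA cert hash
	    sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
	      --discovery-token-ca-cert-hash sha256:<hash> --cri-socket unix:///var/run/crio/crio.sock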
	I0919 17:12:01.998646   28964 cni.go:84] Creating CNI manager for ""
	I0919 17:12:01.998652   28964 cni.go:136] 3 nodes found, recommending kindnet
	I0919 17:12:01.998693   28964 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 17:12:02.005008   28964 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0919 17:12:02.005027   28964 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0919 17:12:02.005033   28964 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0919 17:12:02.005040   28964 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 17:12:02.005045   28964 command_runner.go:130] > Access: 2023-09-19 17:09:35.017363855 +0000
	I0919 17:12:02.005050   28964 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I0919 17:12:02.005056   28964 command_runner.go:130] > Change: 2023-09-19 17:09:33.166363855 +0000
	I0919 17:12:02.005065   28964 command_runner.go:130] >  Birth: -
	I0919 17:12:02.005184   28964 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0919 17:12:02.005199   28964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0919 17:12:02.023594   28964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 17:12:02.457458   28964 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0919 17:12:02.457490   28964 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0919 17:12:02.457498   28964 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0919 17:12:02.457503   28964 command_runner.go:130] > daemonset.apps/kindnet configured
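	A quick way to confirm the kindnet DaemonSet picked up the new node (the app=kindnet label is assumed from the applied manifest):

	    kubectl --context multinode-553715 -n kube-system get daemonset kindnet
	    kubectl --context multinode-553715 -n kube-system get pods -l app=kindnet -o wide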
	I0919 17:12:02.457884   28964 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:12:02.458103   28964 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:12:02.458366   28964 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 17:12:02.458380   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.458390   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.458400   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.462114   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:12:02.462129   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.462136   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.462142   28964 round_trippers.go:580]     Content-Length: 291
	I0919 17:12:02.462147   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.462153   28964 round_trippers.go:580]     Audit-Id: d6a6d601-41cd-4b4e-a755-c24a2fe3e91d
	I0919 17:12:02.462158   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.462163   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.462172   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.462189   28964 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e750729b-e558-4860-b72f-4a5c78572130","resourceVersion":"888","creationTimestamp":"2023-09-19T16:59:41Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0919 17:12:02.462263   28964 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553715" context rescaled to 1 replicas
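	The scale-subresource call above is equivalent to the usual kubectl invocation:

	    kubectl --context multinode-553715 -n kube-system scale deployment coredns --replicas=1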
	I0919 17:12:02.462288   28964 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0919 17:12:02.464219   28964 out.go:177] * Verifying Kubernetes components...
	I0919 17:12:02.465632   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:12:02.479075   28964 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:12:02.479368   28964 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:12:02.479666   28964 node_ready.go:35] waiting up to 6m0s for node "multinode-553715-m02" to be "Ready" ...
	I0919 17:12:02.479744   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:12:02.479755   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.479765   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.479777   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.482309   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:02.482324   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.482331   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.482337   28964 round_trippers.go:580]     Audit-Id: 548cff2c-4f34-45a0-ac77-912bcc6e9713
	I0919 17:12:02.482342   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.482347   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.482352   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.482357   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.482503   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"1030d8e1-d1f3-42be-8dbe-2f7c852318bd","resourceVersion":"1040","creationTimestamp":"2023-09-19T17:12:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0919 17:12:02.482731   28964 node_ready.go:49] node "multinode-553715-m02" has status "Ready":"True"
	I0919 17:12:02.482742   28964 node_ready.go:38] duration metric: took 3.05626ms waiting for node "multinode-553715-m02" to be "Ready" ...
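	The same node readiness check can be reproduced directly with kubectl, e.g.:

	    kubectl --context multinode-553715 wait --for=condition=Ready node/multinode-553715-m02 --timeout=6m
	    kubectl --context multinode-553715 get node multinode-553715-m02 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'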
	I0919 17:12:02.482750   28964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:12:02.482798   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:12:02.482809   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.482818   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.482828   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.486591   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:12:02.486619   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.486626   28964 round_trippers.go:580]     Audit-Id: 86064fbb-a6e1-4c2b-9954-932125cfccca
	I0919 17:12:02.486635   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.486640   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.486651   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.486662   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.486667   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.488199   28964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1047"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"867","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82197 chars]
	I0919 17:12:02.490590   28964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:02.490647   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:12:02.490654   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.490661   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.490670   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.492908   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:02.492928   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.492937   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.492945   28964 round_trippers.go:580]     Audit-Id: f563f727-22ff-400c-ab8f-c71ed04bf410
	I0919 17:12:02.492952   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.492961   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.492973   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.492980   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.493269   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"867","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0919 17:12:02.493644   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:12:02.493655   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.493662   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.493668   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.495397   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:12:02.495417   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.495427   28964 round_trippers.go:580]     Audit-Id: e18a3859-cfd5-48d4-9466-c51c370e57e5
	I0919 17:12:02.495435   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.495443   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.495452   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.495460   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.495476   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.495604   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:12:02.495910   28964 pod_ready.go:92] pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace has status "Ready":"True"
	I0919 17:12:02.495926   28964 pod_ready.go:81] duration metric: took 5.317021ms waiting for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:02.495937   28964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:02.495991   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:12:02.496004   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.496014   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.496024   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.498046   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:02.498065   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.498073   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.498082   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.498090   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.498102   28964 round_trippers.go:580]     Audit-Id: a2759c2c-577c-417a-a9b7-b68a11f9c7c8
	I0919 17:12:02.498111   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.498122   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.498248   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"890","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0919 17:12:02.498594   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:12:02.498609   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.498616   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.498624   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.500208   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:12:02.500226   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.500234   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.500243   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.500256   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.500263   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.500275   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.500284   28964 round_trippers.go:580]     Audit-Id: d49f23ec-bf87-4a90-ad95-fe68d3c76423
	I0919 17:12:02.500397   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:12:02.500675   28964 pod_ready.go:92] pod "etcd-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:12:02.500689   28964 pod_ready.go:81] duration metric: took 4.743959ms waiting for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
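	The per-pod Ready polling above maps onto label-based waits such as the following (labels taken from the pod metadata in the responses):

	    kubectl --context multinode-553715 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	    kubectl --context multinode-553715 -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m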
	I0919 17:12:02.500710   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:02.500764   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553715
	I0919 17:12:02.500774   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.500784   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.500794   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.502610   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:12:02.502628   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.502641   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.502651   28964 round_trippers.go:580]     Audit-Id: f33a889c-af5e-4985-b563-08b41cd45fcf
	I0919 17:12:02.502664   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.502675   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.502684   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.502696   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.503292   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553715","namespace":"kube-system","uid":"e2712b6a-6771-4fb1-9b6d-e50e10e45411","resourceVersion":"859","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.mirror":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.seen":"2023-09-19T16:59:41.749099288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0919 17:12:02.503621   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:12:02.503631   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.503638   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.503644   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.511141   28964 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0919 17:12:02.511161   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.511171   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.511178   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.511187   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.511196   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.511209   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.511221   28964 round_trippers.go:580]     Audit-Id: 42c5c308-b94e-4987-b48e-4f45fcf48ab0
	I0919 17:12:02.511390   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:12:02.511716   28964 pod_ready.go:92] pod "kube-apiserver-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:12:02.511732   28964 pod_ready.go:81] duration metric: took 11.00886ms waiting for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:02.511741   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:02.511800   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553715
	I0919 17:12:02.511807   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.511814   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.511820   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.514087   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:02.514102   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.514109   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.514114   28964 round_trippers.go:580]     Audit-Id: e185206f-80fe-4eb8-b545-51cee0e40e79
	I0919 17:12:02.514119   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.514124   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.514129   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.514134   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.514263   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553715","namespace":"kube-system","uid":"56eb8685-d2ae-4f50-8da1-dca616585190","resourceVersion":"861","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.mirror":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.seen":"2023-09-19T16:59:41.749100351Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0919 17:12:02.514730   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:12:02.514745   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.514756   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.514765   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.516955   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:02.516974   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.516985   28964 round_trippers.go:580]     Audit-Id: 7fd18c85-6e81-49ea-80a9-3df79f057887
	I0919 17:12:02.516994   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.517001   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.517009   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.517023   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.517034   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.517232   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:12:02.517592   28964 pod_ready.go:92] pod "kube-controller-manager-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:12:02.517610   28964 pod_ready.go:81] duration metric: took 5.855755ms waiting for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:02.517622   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:02.679917   28964 request.go:629] Waited for 162.236328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:12:02.679996   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:12:02.680004   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.680019   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.680035   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.682811   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:02.682836   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.682846   28964 round_trippers.go:580]     Audit-Id: bd74a6d8-6eaa-4c87-ac7f-de12218fdacb
	I0919 17:12:02.682854   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.682862   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.682869   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.682877   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.682885   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.683091   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d5vl8","generateName":"kube-proxy-","namespace":"kube-system","uid":"88ab05d6-264f-40d8-9c55-c58829613212","resourceVersion":"983","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5882 chars]
	I0919 17:12:02.879873   28964 request.go:629] Waited for 196.309847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:12:02.879921   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:12:02.879926   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:02.879934   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:02.879940   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:02.882647   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:02.882666   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:02.882673   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:02.882681   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:02.882690   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:02.882698   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:02 GMT
	I0919 17:12:02.882708   28964 round_trippers.go:580]     Audit-Id: f96363ce-230f-40a1-88d1-afd9f7fd1d2c
	I0919 17:12:02.882717   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:02.883017   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"1030d8e1-d1f3-42be-8dbe-2f7c852318bd","resourceVersion":"1040","creationTimestamp":"2023-09-19T17:12:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0919 17:12:03.080722   28964 request.go:629] Waited for 197.412672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:12:03.080819   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:12:03.080830   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:03.080844   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:03.080857   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:03.084055   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:12:03.084079   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:03.084090   28964 round_trippers.go:580]     Audit-Id: 8d1782ef-9a3a-4062-8bbb-33cd95b9ff11
	I0919 17:12:03.084098   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:03.084106   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:03.084114   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:03.084122   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:03.084130   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:03 GMT
	I0919 17:12:03.084714   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d5vl8","generateName":"kube-proxy-","namespace":"kube-system","uid":"88ab05d6-264f-40d8-9c55-c58829613212","resourceVersion":"983","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5882 chars]
	I0919 17:12:03.280465   28964 request.go:629] Waited for 195.308864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:12:03.280535   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:12:03.280544   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:03.280551   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:03.280559   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:03.283380   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:03.283402   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:03.283412   28964 round_trippers.go:580]     Audit-Id: 74f3df73-039e-4893-8339-5dda7de058af
	I0919 17:12:03.283421   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:03.283429   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:03.283440   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:03.283452   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:03.283463   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:03 GMT
	I0919 17:12:03.283624   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"1030d8e1-d1f3-42be-8dbe-2f7c852318bd","resourceVersion":"1040","creationTimestamp":"2023-09-19T17:12:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0919 17:12:03.784670   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:12:03.784693   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:03.784701   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:03.784708   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:03.788135   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:12:03.788154   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:03.788164   28964 round_trippers.go:580]     Audit-Id: 51b5b753-ff88-45cc-b860-16f16eb138c7
	I0919 17:12:03.788172   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:03.788183   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:03.788195   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:03.788203   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:03.788209   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:03 GMT
	I0919 17:12:03.789071   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d5vl8","generateName":"kube-proxy-","namespace":"kube-system","uid":"88ab05d6-264f-40d8-9c55-c58829613212","resourceVersion":"1058","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0919 17:12:03.789549   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:12:03.789565   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:03.789576   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:03.789585   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:03.791609   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:03.791630   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:03.791639   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:03.791651   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:03.791660   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:03 GMT
	I0919 17:12:03.791671   28964 round_trippers.go:580]     Audit-Id: b12f99a2-37ba-4867-9725-0609b0b66fb1
	I0919 17:12:03.791682   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:03.791693   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:03.792056   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"1030d8e1-d1f3-42be-8dbe-2f7c852318bd","resourceVersion":"1040","creationTimestamp":"2023-09-19T17:12:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0919 17:12:03.792384   28964 pod_ready.go:92] pod "kube-proxy-d5vl8" in "kube-system" namespace has status "Ready":"True"
	I0919 17:12:03.792403   28964 pod_ready.go:81] duration metric: took 1.274768425s waiting for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:03.792446   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gnjwl" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:03.880725   28964 request.go:629] Waited for 88.222671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:12:03.880791   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:12:03.880796   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:03.880804   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:03.880810   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:03.883748   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:03.883768   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:03.883774   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:03.883780   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:03.883785   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:03.883793   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:03 GMT
	I0919 17:12:03.883798   28964 round_trippers.go:580]     Audit-Id: 5e1e6b91-7efb-43c9-9b05-2a0bddd4433d
	I0919 17:12:03.883803   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:03.884333   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gnjwl","generateName":"kube-proxy-","namespace":"kube-system","uid":"86e13bd9-e0df-4a0b-b9a7-1746bb37c23b","resourceVersion":"708","creationTimestamp":"2023-09-19T17:01:27Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:01:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0919 17:12:04.080137   28964 request.go:629] Waited for 195.377424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:12:04.080205   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:12:04.080210   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:04.080218   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:04.080223   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:04.082672   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:04.082693   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:04.082700   28964 round_trippers.go:580]     Audit-Id: 6c8e220c-6822-4c88-b9fe-6d49b08d4081
	I0919 17:12:04.082707   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:04.082715   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:04.082723   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:04.082732   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:04.082741   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:04 GMT
	I0919 17:12:04.082921   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m03","uid":"f3827816-de3c-418e-aa24-505b515ee53b","resourceVersion":"878","creationTimestamp":"2023-09-19T17:02:09Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0919 17:12:04.083262   28964 pod_ready.go:92] pod "kube-proxy-gnjwl" in "kube-system" namespace has status "Ready":"True"
	I0919 17:12:04.083284   28964 pod_ready.go:81] duration metric: took 290.82998ms waiting for pod "kube-proxy-gnjwl" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:04.083298   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:04.280238   28964 request.go:629] Waited for 196.867329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:12:04.280296   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:12:04.280302   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:04.280315   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:04.280328   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:04.283664   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:12:04.283693   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:04.283704   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:04.283712   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:04 GMT
	I0919 17:12:04.283721   28964 round_trippers.go:580]     Audit-Id: 32d7bcda-a4c2-4244-845e-c0507e4b9b6a
	I0919 17:12:04.283737   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:04.283749   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:04.283757   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:04.284125   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvcz9","generateName":"kube-proxy-","namespace":"kube-system","uid":"377d6478-cda2-47b9-8af8-cff3064e8524","resourceVersion":"825","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0919 17:12:04.479833   28964 request.go:629] Waited for 195.214052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:12:04.479896   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:12:04.479908   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:04.479922   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:04.479932   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:04.483674   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:12:04.483695   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:04.483702   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:04 GMT
	I0919 17:12:04.483707   28964 round_trippers.go:580]     Audit-Id: 2878b2c2-0dd1-4dc5-bc43-516fed57d72d
	I0919 17:12:04.483712   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:04.483717   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:04.483722   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:04.483727   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:04.484086   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:12:04.484387   28964 pod_ready.go:92] pod "kube-proxy-tvcz9" in "kube-system" namespace has status "Ready":"True"
	I0919 17:12:04.484401   28964 pod_ready.go:81] duration metric: took 401.095838ms waiting for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:04.484434   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:04.679818   28964 request.go:629] Waited for 195.314573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:12:04.679872   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:12:04.679877   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:04.679884   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:04.679891   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:04.683480   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:12:04.683504   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:04.683514   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:04.683526   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:04.683534   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:04 GMT
	I0919 17:12:04.683542   28964 round_trippers.go:580]     Audit-Id: c46bb358-28cf-4026-91ab-2630ae61cca7
	I0919 17:12:04.683554   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:04.683566   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:04.684273   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553715","namespace":"kube-system","uid":"27c15070-fba4-4237-b6d2-4727af1e5809","resourceVersion":"857","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.mirror":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.seen":"2023-09-19T16:59:41.749088169Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0919 17:12:04.880020   28964 request.go:629] Waited for 195.292049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:12:04.880073   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:12:04.880079   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:04.880090   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:04.880100   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:04.882837   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:12:04.882856   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:04.882863   28964 round_trippers.go:580]     Audit-Id: bd36e622-6886-4f40-8471-5eb63861cc9d
	I0919 17:12:04.882869   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:04.882874   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:04.882879   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:04.882884   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:04.882889   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:04 GMT
	I0919 17:12:04.883536   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:12:04.883841   28964 pod_ready.go:92] pod "kube-scheduler-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:12:04.883855   28964 pod_ready.go:81] duration metric: took 399.413138ms waiting for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:12:04.883866   28964 pod_ready.go:38] duration metric: took 2.401105689s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
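The pod_ready and round_trippers entries above follow the usual client-go pattern: GET each control-plane pod, read its Ready condition, then re-check the owning node, while the "Waited ... due to client-side throttling" lines come from client-go's client-side rate limiter (QPS/Burst), not from API Priority and Fairness. A minimal sketch of that polling pattern, assuming client-go and a placeholder kubeconfig path (the pod name is taken from the log above; nothing below is minikube's actual code):

// Illustrative sketch only: poll a pod's Ready condition with client-go,
// the same pattern the pod_ready lines above record.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube writes its own under the profile dir.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // raise the client-side rate limit (defaults are QPS 5, Burst 10)
	cfg.Burst = 100 // fewer "Waited ... due to client-side throttling" messages
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the pod reports Ready, mirroring pod_ready.go's wait loop.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-553715", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("pod ready:", err == nil)
}

With the default limiter (QPS 5, Burst 10) bursts of GETs get spaced out, which is what produces the sub-200ms waits logged above; raising QPS and Burst as in the sketch makes those messages go away.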
	I0919 17:12:04.883881   28964 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:12:04.883920   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:12:04.899343   28964 system_svc.go:56] duration metric: took 15.454355ms WaitForService to wait for kubelet.
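system_svc.go confirms the kubelet unit is active by running systemctl through the same SSH runner. The local equivalent of that check is a one-liner; the sketch below is illustrative, not the actual helper:

// Sketch: exit status 0 from `systemctl is-active --quiet kubelet` means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}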
	I0919 17:12:04.899369   28964 kubeadm.go:581] duration metric: took 2.437062075s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:12:04.899387   28964 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:12:05.080783   28964 request.go:629] Waited for 181.338301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I0919 17:12:05.080843   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I0919 17:12:05.080849   28964 round_trippers.go:469] Request Headers:
	I0919 17:12:05.080856   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:12:05.080863   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:12:05.084196   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:12:05.084223   28964 round_trippers.go:577] Response Headers:
	I0919 17:12:05.084239   28964 round_trippers.go:580]     Audit-Id: 4913cb6e-28f6-4ac3-b3e4-76d078d0e0c6
	I0919 17:12:05.084248   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:12:05.084256   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:12:05.084264   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:12:05.084273   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:12:05.084279   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:12:05 GMT
	I0919 17:12:05.085030   28964 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1063"},"items":[{"metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15104 chars]
	I0919 17:12:05.085607   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:12:05.085628   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:12:05.085637   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:12:05.085641   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:12:05.085644   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:12:05.085648   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:12:05.085659   28964 node_conditions.go:105] duration metric: took 186.268824ms to run NodePressure ...
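The node_conditions lines read each node's CPU and ephemeral-storage capacity from a single NodeList response; all three nodes report cpu=2 and ephemeral-storage=17784752Ki, so the NodePressure verification completes immediately. A self-contained sketch of the same read, under the same client-go and placeholder-kubeconfig assumptions as the earlier example (illustrative only):

// Illustrative sketch: list nodes and print the capacity figures the
// node_conditions lines report.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}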
	I0919 17:12:05.085676   28964 start.go:228] waiting for startup goroutines ...
	I0919 17:12:05.085695   28964 start.go:242] writing updated cluster config ...
	I0919 17:12:05.086157   28964 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:12:05.086236   28964 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 17:12:05.088469   28964 out.go:177] * Starting worker node multinode-553715-m03 in cluster multinode-553715
	I0919 17:12:05.089870   28964 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 17:12:05.089892   28964 cache.go:57] Caching tarball of preloaded images
	I0919 17:12:05.089991   28964 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:12:05.090005   28964 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 17:12:05.090093   28964 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/config.json ...
	I0919 17:12:05.090261   28964 start.go:365] acquiring machines lock for multinode-553715-m03: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:12:05.090306   28964 start.go:369] acquired machines lock for "multinode-553715-m03" in 25.513µs
	I0919 17:12:05.090323   28964 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:12:05.090332   28964 fix.go:54] fixHost starting: m03
	I0919 17:12:05.090591   28964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:12:05.090649   28964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:12:05.105282   28964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0919 17:12:05.105769   28964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:12:05.106227   28964 main.go:141] libmachine: Using API Version  1
	I0919 17:12:05.106247   28964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:12:05.106581   28964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:12:05.106770   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .DriverName
	I0919 17:12:05.106927   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetState
	I0919 17:12:05.108664   28964 fix.go:102] recreateIfNeeded on multinode-553715-m03: state=Running err=<nil>
	W0919 17:12:05.108683   28964 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:12:05.111527   28964 out.go:177] * Updating the running kvm2 "multinode-553715-m03" VM ...
	I0919 17:12:05.113032   28964 machine.go:88] provisioning docker machine ...
	I0919 17:12:05.113058   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .DriverName
	I0919 17:12:05.113300   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetMachineName
	I0919 17:12:05.113447   28964 buildroot.go:166] provisioning hostname "multinode-553715-m03"
	I0919 17:12:05.113469   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetMachineName
	I0919 17:12:05.113625   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHHostname
	I0919 17:12:05.116252   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.116694   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:12:05.116730   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.116900   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHPort
	I0919 17:12:05.117057   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:12:05.117179   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:12:05.117270   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHUsername
	I0919 17:12:05.117381   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:12:05.117875   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0919 17:12:05.117894   28964 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-553715-m03 && echo "multinode-553715-m03" | sudo tee /etc/hostname
	I0919 17:12:05.268384   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-553715-m03
	
	I0919 17:12:05.268440   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHHostname
	I0919 17:12:05.271102   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.271454   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:12:05.271490   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.271607   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHPort
	I0919 17:12:05.271818   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:12:05.271988   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:12:05.272110   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHUsername
	I0919 17:12:05.272255   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:12:05.272586   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0919 17:12:05.272614   28964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-553715-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-553715-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-553715-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:12:05.405429   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
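The provisioning step above runs two commands over SSH against 192.168.39.229: set the hostname, then idempotently make sure /etc/hosts resolves it (replace an existing 127.0.1.1 entry or append one). A rough stand-in using golang.org/x/crypto/ssh, with a placeholder key path and a single illustrative command; this is not minikube's ssh_runner implementation:

// Illustrative sketch: run a remote command over SSH the way the provisioner does.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/multinode-553715-m03/id_rsa") // placeholder path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.229:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same pattern as the logged command: set the hostname, then persist it.
	out, err := session.CombinedOutput(
		`sudo hostname multinode-553715-m03 && echo "multinode-553715-m03" | sudo tee /etc/hostname`)
	fmt.Println(string(out), err)
}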
	I0919 17:12:05.405459   28964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:12:05.405476   28964 buildroot.go:174] setting up certificates
	I0919 17:12:05.405482   28964 provision.go:83] configureAuth start
	I0919 17:12:05.405491   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetMachineName
	I0919 17:12:05.405732   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetIP
	I0919 17:12:05.408259   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.408549   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:12:05.408572   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.408731   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHHostname
	I0919 17:12:05.411050   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.411355   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:12:05.411384   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.411545   28964 provision.go:138] copyHostCerts
	I0919 17:12:05.411583   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:12:05.411618   28964 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:12:05.411630   28964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:12:05.411708   28964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:12:05.411792   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:12:05.411817   28964 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:12:05.411823   28964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:12:05.411862   28964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:12:05.411918   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:12:05.411940   28964 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:12:05.411949   28964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:12:05.411982   28964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:12:05.412074   28964 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.multinode-553715-m03 san=[192.168.39.229 192.168.39.229 localhost 127.0.0.1 minikube multinode-553715-m03]
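The server certificate above is minted in Go by minikube's provisioner, signed with the ca.pem/ca-key.pem pair and carrying the listed SANs. Purely as an illustration (this is not how minikube actually does it), an equivalent openssl invocation for the same CA and SAN list would look roughly like:

    # hypothetical openssl equivalent of the Go certificate generation logged above
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.multinode-553715-m03" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:192.168.39.229,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-553715-m03")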
	I0919 17:12:05.459269   28964 provision.go:172] copyRemoteCerts
	I0919 17:12:05.459317   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:12:05.459337   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHHostname
	I0919 17:12:05.461600   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.461950   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:12:05.461985   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.462119   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHPort
	I0919 17:12:05.462313   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:12:05.462453   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHUsername
	I0919 17:12:05.462591   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m03/id_rsa Username:docker}
	I0919 17:12:05.558418   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 17:12:05.558471   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:12:05.581519   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 17:12:05.581591   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0919 17:12:05.604072   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 17:12:05.604137   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:12:05.626697   28964 provision.go:86] duration metric: configureAuth took 221.201071ms
	I0919 17:12:05.626724   28964 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:12:05.626976   28964 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:12:05.627042   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHHostname
	I0919 17:12:05.629511   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.629945   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:12:05.629997   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:12:05.630199   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHPort
	I0919 17:12:05.630347   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:12:05.630538   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:12:05.630671   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHUsername
	I0919 17:12:05.630829   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:12:05.631265   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0919 17:12:05.631288   28964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:13:36.195731   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:13:36.195756   28964 machine.go:91] provisioned docker machine in 1m31.082707295s
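The "%!s(MISSING)" in the SSH command above is minikube's logger swallowing a printf format verb; the command that actually ran was, in all likelihood, the plain "%s" form below. This is a reconstruction, not a quote from the log:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The roughly 91 s gap between issuing the command (17:12:05) and its completion (17:13:36), most of which is presumably the crio restart, is what pushes "provisioned docker machine" to 1m31s.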
	I0919 17:13:36.195766   28964 start.go:300] post-start starting for "multinode-553715-m03" (driver="kvm2")
	I0919 17:13:36.195776   28964 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:13:36.195795   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .DriverName
	I0919 17:13:36.196140   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:13:36.196177   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHHostname
	I0919 17:13:36.198826   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.199200   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:13:36.199236   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.199328   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHPort
	I0919 17:13:36.199500   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:13:36.199630   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHUsername
	I0919 17:13:36.199778   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m03/id_rsa Username:docker}
	I0919 17:13:36.346783   28964 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:13:36.353323   28964 command_runner.go:130] > NAME=Buildroot
	I0919 17:13:36.353343   28964 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I0919 17:13:36.353348   28964 command_runner.go:130] > ID=buildroot
	I0919 17:13:36.353353   28964 command_runner.go:130] > VERSION_ID=2021.02.12
	I0919 17:13:36.353357   28964 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0919 17:13:36.353646   28964 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:13:36.353669   28964 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:13:36.353736   28964 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:13:36.353831   28964 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:13:36.353846   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /etc/ssl/certs/132392.pem
	I0919 17:13:36.353952   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:13:36.386733   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:13:36.409772   28964 start.go:303] post-start completed in 213.988841ms
	I0919 17:13:36.409795   28964 fix.go:56] fixHost completed within 1m31.319463641s
	I0919 17:13:36.409815   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHHostname
	I0919 17:13:36.412548   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.412960   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:13:36.412994   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.413114   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHPort
	I0919 17:13:36.413295   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:13:36.413421   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:13:36.413534   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHUsername
	I0919 17:13:36.413674   28964 main.go:141] libmachine: Using SSH client type: native
	I0919 17:13:36.413973   28964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0919 17:13:36.413987   28964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:13:36.553336   28964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695143616.547594061
	
	I0919 17:13:36.553361   28964 fix.go:206] guest clock: 1695143616.547594061
	I0919 17:13:36.553370   28964 fix.go:219] Guest: 2023-09-19 17:13:36.547594061 +0000 UTC Remote: 2023-09-19 17:13:36.40979921 +0000 UTC m=+552.284867677 (delta=137.794851ms)
	I0919 17:13:36.553389   28964 fix.go:190] guest clock delta is within tolerance: 137.794851ms
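Here again the logger ate the format verbs: "date +%!s(MISSING).%!N(MISSING)" is almost certainly "date +%s.%N", i.e. Unix seconds with nanoseconds, and the reported delta is simple subtraction of the two timestamps:

    date +%s.%N    # on the guest: 1695143616.547594061
    # guest 17:13:36.547594061 - remote 17:13:36.409799210 = 0.137794851 s, the 137.794851ms delta above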
	I0919 17:13:36.553395   28964 start.go:83] releasing machines lock for "multinode-553715-m03", held for 1m31.463077981s
	I0919 17:13:36.553418   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .DriverName
	I0919 17:13:36.553714   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetIP
	I0919 17:13:36.556378   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.556841   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:13:36.556881   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.558967   28964 out.go:177] * Found network options:
	I0919 17:13:36.560565   28964 out.go:177]   - NO_PROXY=192.168.39.38,192.168.39.11
	W0919 17:13:36.561847   28964 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 17:13:36.561866   28964 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 17:13:36.561879   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .DriverName
	I0919 17:13:36.562524   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .DriverName
	I0919 17:13:36.562744   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .DriverName
	I0919 17:13:36.562848   28964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:13:36.562885   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHHostname
	W0919 17:13:36.562968   28964 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 17:13:36.563000   28964 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 17:13:36.563066   28964 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:13:36.563088   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHHostname
	I0919 17:13:36.565620   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.565976   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.566011   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:13:36.566038   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.566180   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHPort
	I0919 17:13:36.566386   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:13:36.566448   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:13:36.566475   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:36.566545   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHUsername
	I0919 17:13:36.566617   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHPort
	I0919 17:13:36.566671   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m03/id_rsa Username:docker}
	I0919 17:13:36.566759   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHKeyPath
	I0919 17:13:36.566909   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetSSHUsername
	I0919 17:13:36.567044   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m03/id_rsa Username:docker}
	I0919 17:13:36.808639   28964 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0919 17:13:36.808648   28964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 17:13:36.814664   28964 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0919 17:13:36.814696   28964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:13:36.814739   28964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:13:36.823426   28964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
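The "%!p(MISSING)" in the find invocation above is the mangled "%p" printf directive of find; reconstructed with shell quoting, the command that would disable any stray bridge/podman CNI configs is approximately:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;

Nothing matched here, hence "nothing to disable".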
	I0919 17:13:36.823448   28964 start.go:469] detecting cgroup driver to use...
	I0919 17:13:36.823505   28964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:13:36.840194   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:13:36.862681   28964 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:13:36.862738   28964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:13:36.876864   28964 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:13:36.889769   28964 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:13:37.031973   28964 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:13:37.183942   28964 docker.go:212] disabling docker service ...
	I0919 17:13:37.184017   28964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:13:37.201701   28964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:13:37.214823   28964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:13:37.353257   28964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:13:37.491270   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:13:37.503586   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:13:37.521360   28964 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0919 17:13:37.521393   28964 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0919 17:13:37.521451   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:13:37.531105   28964 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:13:37.531170   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:13:37.541386   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:13:37.550599   28964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:13:37.560190   28964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:13:37.570692   28964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:13:37.579814   28964 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0919 17:13:37.580046   28964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:13:37.589455   28964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:13:37.711504   28964 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:13:39.721919   28964 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.010381623s)
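The sed edits above (pause image, cgroup driver, conmon cgroup) rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. Assuming those keys are present in that drop-in, a quick way to confirm the result would be:

    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"

The cgroup_manager and conmon_cgroup values are confirmed by the `crio config` dump further down in this log.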
	I0919 17:13:39.721949   28964 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:13:39.721993   28964 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:13:39.728524   28964 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0919 17:13:39.728548   28964 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0919 17:13:39.728559   28964 command_runner.go:130] > Device: 16h/22d	Inode: 1189        Links: 1
	I0919 17:13:39.728570   28964 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 17:13:39.728579   28964 command_runner.go:130] > Access: 2023-09-19 17:13:39.617604784 +0000
	I0919 17:13:39.728587   28964 command_runner.go:130] > Modify: 2023-09-19 17:13:39.617604784 +0000
	I0919 17:13:39.728594   28964 command_runner.go:130] > Change: 2023-09-19 17:13:39.617604784 +0000
	I0919 17:13:39.728600   28964 command_runner.go:130] >  Birth: -
	I0919 17:13:39.729016   28964 start.go:537] Will wait 60s for crictl version
	I0919 17:13:39.729071   28964 ssh_runner.go:195] Run: which crictl
	I0919 17:13:39.733359   28964 command_runner.go:130] > /usr/bin/crictl
	I0919 17:13:39.733678   28964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:13:39.783768   28964 command_runner.go:130] > Version:  0.1.0
	I0919 17:13:39.783793   28964 command_runner.go:130] > RuntimeName:  cri-o
	I0919 17:13:39.783800   28964 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0919 17:13:39.783808   28964 command_runner.go:130] > RuntimeApiVersion:  v1
	I0919 17:13:39.785222   28964 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:13:39.785294   28964 ssh_runner.go:195] Run: crio --version
	I0919 17:13:39.838065   28964 command_runner.go:130] > crio version 1.24.1
	I0919 17:13:39.838094   28964 command_runner.go:130] > Version:          1.24.1
	I0919 17:13:39.838104   28964 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 17:13:39.838111   28964 command_runner.go:130] > GitTreeState:     dirty
	I0919 17:13:39.838123   28964 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 17:13:39.838130   28964 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 17:13:39.838136   28964 command_runner.go:130] > Compiler:         gc
	I0919 17:13:39.838143   28964 command_runner.go:130] > Platform:         linux/amd64
	I0919 17:13:39.838152   28964 command_runner.go:130] > Linkmode:         dynamic
	I0919 17:13:39.838164   28964 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 17:13:39.838171   28964 command_runner.go:130] > SeccompEnabled:   true
	I0919 17:13:39.838178   28964 command_runner.go:130] > AppArmorEnabled:  false
	I0919 17:13:39.838308   28964 ssh_runner.go:195] Run: crio --version
	I0919 17:13:39.892550   28964 command_runner.go:130] > crio version 1.24.1
	I0919 17:13:39.892574   28964 command_runner.go:130] > Version:          1.24.1
	I0919 17:13:39.892589   28964 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0919 17:13:39.892594   28964 command_runner.go:130] > GitTreeState:     dirty
	I0919 17:13:39.892602   28964 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I0919 17:13:39.892606   28964 command_runner.go:130] > GoVersion:        go1.19.9
	I0919 17:13:39.892610   28964 command_runner.go:130] > Compiler:         gc
	I0919 17:13:39.892615   28964 command_runner.go:130] > Platform:         linux/amd64
	I0919 17:13:39.892620   28964 command_runner.go:130] > Linkmode:         dynamic
	I0919 17:13:39.892630   28964 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0919 17:13:39.892637   28964 command_runner.go:130] > SeccompEnabled:   true
	I0919 17:13:39.892643   28964 command_runner.go:130] > AppArmorEnabled:  false
	I0919 17:13:39.896349   28964 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I0919 17:13:39.897767   28964 out.go:177]   - env NO_PROXY=192.168.39.38
	I0919 17:13:39.899258   28964 out.go:177]   - env NO_PROXY=192.168.39.38,192.168.39.11
	I0919 17:13:39.900686   28964 main.go:141] libmachine: (multinode-553715-m03) Calling .GetIP
	I0919 17:13:39.903153   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:39.903542   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:33:10", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:02:03 +0000 UTC Type:0 Mac:52:54:00:77:33:10 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-553715-m03 Clientid:01:52:54:00:77:33:10}
	I0919 17:13:39.903564   28964 main.go:141] libmachine: (multinode-553715-m03) DBG | domain multinode-553715-m03 has defined IP address 192.168.39.229 and MAC address 52:54:00:77:33:10 in network mk-multinode-553715
	I0919 17:13:39.903791   28964 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 17:13:39.908297   28964 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0919 17:13:39.908556   28964 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715 for IP: 192.168.39.229
	I0919 17:13:39.908591   28964 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:13:39.908719   28964 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:13:39.908754   28964 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:13:39.908766   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 17:13:39.908779   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 17:13:39.908792   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 17:13:39.908804   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 17:13:39.908850   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:13:39.908876   28964 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:13:39.908886   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:13:39.908907   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:13:39.908930   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:13:39.908951   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:13:39.908988   28964 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:13:39.909012   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> /usr/share/ca-certificates/132392.pem
	I0919 17:13:39.909035   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:13:39.909047   28964 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem -> /usr/share/ca-certificates/13239.pem
	I0919 17:13:39.909326   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:13:39.934744   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:13:39.960799   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:13:39.986753   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:13:40.011169   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:13:40.034561   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:13:40.056992   28964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:13:40.079298   28964 ssh_runner.go:195] Run: openssl version
	I0919 17:13:40.084724   28964 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0919 17:13:40.085180   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:13:40.095767   28964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:13:40.100251   28964 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:13:40.100400   28964 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:13:40.100460   28964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:13:40.109418   28964 command_runner.go:130] > 3ec20f2e
	I0919 17:13:40.110110   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:13:40.119698   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:13:40.131170   28964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:13:40.135659   28964 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:13:40.135923   28964 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:13:40.135964   28964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:13:40.141381   28964 command_runner.go:130] > b5213941
	I0919 17:13:40.141430   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:13:40.150383   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:13:40.160661   28964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:13:40.165246   28964 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:13:40.165352   28964 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:13:40.165400   28964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:13:40.170985   28964 command_runner.go:130] > 51391683
	I0919 17:13:40.171208   28964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
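Each of the three certificates above (132392.pem, minikubeCA.pem, 13239.pem) goes through the same three steps; condensed here for the first one, with the hash value taken from the log:

    sudo ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem      # expose under /etc/ssl/certs
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem)      # subject hash: 3ec20f2e here
    sudo ln -fs /etc/ssl/certs/132392.pem "/etc/ssl/certs/${hash}.0"                 # hash-named link used for CA lookup

The ".0" symlink named after the subject hash is what lets OpenSSL, and therefore the container runtime, resolve the CA certificate by hash.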
	I0919 17:13:40.180379   28964 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:13:40.184436   28964 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 17:13:40.184678   28964 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 17:13:40.184781   28964 ssh_runner.go:195] Run: crio config
	I0919 17:13:40.256371   28964 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0919 17:13:40.256402   28964 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0919 17:13:40.256425   28964 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0919 17:13:40.256431   28964 command_runner.go:130] > #
	I0919 17:13:40.256443   28964 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0919 17:13:40.256458   28964 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0919 17:13:40.256472   28964 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0919 17:13:40.256488   28964 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0919 17:13:40.256498   28964 command_runner.go:130] > # reload'.
	I0919 17:13:40.256512   28964 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0919 17:13:40.256527   28964 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0919 17:13:40.256540   28964 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0919 17:13:40.256552   28964 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0919 17:13:40.256562   28964 command_runner.go:130] > [crio]
	I0919 17:13:40.256574   28964 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0919 17:13:40.256586   28964 command_runner.go:130] > # containers images, in this directory.
	I0919 17:13:40.256597   28964 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0919 17:13:40.256610   28964 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0919 17:13:40.256620   28964 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0919 17:13:40.256630   28964 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0919 17:13:40.256639   28964 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0919 17:13:40.256652   28964 command_runner.go:130] > storage_driver = "overlay"
	I0919 17:13:40.256665   28964 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0919 17:13:40.256677   28964 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0919 17:13:40.256687   28964 command_runner.go:130] > storage_option = [
	I0919 17:13:40.256697   28964 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0919 17:13:40.256705   28964 command_runner.go:130] > ]
	I0919 17:13:40.256715   28964 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0919 17:13:40.256728   28964 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0919 17:13:40.256739   28964 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0919 17:13:40.256751   28964 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0919 17:13:40.256763   28964 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0919 17:13:40.256773   28964 command_runner.go:130] > # always happen on a node reboot
	I0919 17:13:40.256784   28964 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0919 17:13:40.256796   28964 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0919 17:13:40.256809   28964 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0919 17:13:40.256825   28964 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0919 17:13:40.256839   28964 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0919 17:13:40.256857   28964 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0919 17:13:40.256875   28964 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0919 17:13:40.256886   28964 command_runner.go:130] > # internal_wipe = true
	I0919 17:13:40.256899   28964 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0919 17:13:40.256913   28964 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0919 17:13:40.256926   28964 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0919 17:13:40.256938   28964 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0919 17:13:40.256951   28964 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0919 17:13:40.256958   28964 command_runner.go:130] > [crio.api]
	I0919 17:13:40.256970   28964 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0919 17:13:40.256981   28964 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0919 17:13:40.256994   28964 command_runner.go:130] > # IP address on which the stream server will listen.
	I0919 17:13:40.257039   28964 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0919 17:13:40.257064   28964 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0919 17:13:40.257072   28964 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0919 17:13:40.257083   28964 command_runner.go:130] > # stream_port = "0"
	I0919 17:13:40.257093   28964 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0919 17:13:40.257101   28964 command_runner.go:130] > # stream_enable_tls = false
	I0919 17:13:40.257115   28964 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0919 17:13:40.257126   28964 command_runner.go:130] > # stream_idle_timeout = ""
	I0919 17:13:40.257140   28964 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0919 17:13:40.257154   28964 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0919 17:13:40.257165   28964 command_runner.go:130] > # minutes.
	I0919 17:13:40.257176   28964 command_runner.go:130] > # stream_tls_cert = ""
	I0919 17:13:40.257190   28964 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0919 17:13:40.257203   28964 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0919 17:13:40.257213   28964 command_runner.go:130] > # stream_tls_key = ""
	I0919 17:13:40.257223   28964 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0919 17:13:40.257235   28964 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0919 17:13:40.257249   28964 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0919 17:13:40.257261   28964 command_runner.go:130] > # stream_tls_ca = ""
	I0919 17:13:40.257274   28964 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 17:13:40.257285   28964 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0919 17:13:40.257299   28964 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0919 17:13:40.257311   28964 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0919 17:13:40.257335   28964 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0919 17:13:40.257351   28964 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0919 17:13:40.257361   28964 command_runner.go:130] > [crio.runtime]
	I0919 17:13:40.257371   28964 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0919 17:13:40.257383   28964 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0919 17:13:40.257391   28964 command_runner.go:130] > # "nofile=1024:2048"
	I0919 17:13:40.257404   28964 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0919 17:13:40.257413   28964 command_runner.go:130] > # default_ulimits = [
	I0919 17:13:40.257419   28964 command_runner.go:130] > # ]
	I0919 17:13:40.257431   28964 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0919 17:13:40.257440   28964 command_runner.go:130] > # no_pivot = false
	I0919 17:13:40.257449   28964 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0919 17:13:40.257462   28964 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0919 17:13:40.257473   28964 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0919 17:13:40.257487   28964 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0919 17:13:40.257498   28964 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0919 17:13:40.257511   28964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 17:13:40.257523   28964 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0919 17:13:40.257531   28964 command_runner.go:130] > # Cgroup setting for conmon
	I0919 17:13:40.257546   28964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0919 17:13:40.257556   28964 command_runner.go:130] > conmon_cgroup = "pod"
	I0919 17:13:40.257570   28964 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0919 17:13:40.257580   28964 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0919 17:13:40.257595   28964 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0919 17:13:40.257603   28964 command_runner.go:130] > conmon_env = [
	I0919 17:13:40.257617   28964 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0919 17:13:40.257626   28964 command_runner.go:130] > ]
	I0919 17:13:40.257636   28964 command_runner.go:130] > # Additional environment variables to set for all the
	I0919 17:13:40.257649   28964 command_runner.go:130] > # containers. These are overridden if set in the
	I0919 17:13:40.257660   28964 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0919 17:13:40.257670   28964 command_runner.go:130] > # default_env = [
	I0919 17:13:40.257676   28964 command_runner.go:130] > # ]
	I0919 17:13:40.257688   28964 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0919 17:13:40.257727   28964 command_runner.go:130] > # selinux = false
	I0919 17:13:40.257740   28964 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0919 17:13:40.257754   28964 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0919 17:13:40.257768   28964 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0919 17:13:40.257778   28964 command_runner.go:130] > # seccomp_profile = ""
	I0919 17:13:40.257790   28964 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0919 17:13:40.257802   28964 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0919 17:13:40.257814   28964 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0919 17:13:40.257826   28964 command_runner.go:130] > # which might increase security.
	I0919 17:13:40.257837   28964 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0919 17:13:40.257852   28964 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0919 17:13:40.257864   28964 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0919 17:13:40.257877   28964 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0919 17:13:40.257891   28964 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0919 17:13:40.257905   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:13:40.257917   28964 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0919 17:13:40.257929   28964 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0919 17:13:40.257940   28964 command_runner.go:130] > # the cgroup blockio controller.
	I0919 17:13:40.257950   28964 command_runner.go:130] > # blockio_config_file = ""
	I0919 17:13:40.257963   28964 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0919 17:13:40.257973   28964 command_runner.go:130] > # irqbalance daemon.
	I0919 17:13:40.257981   28964 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0919 17:13:40.257995   28964 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0919 17:13:40.258008   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:13:40.258018   28964 command_runner.go:130] > # rdt_config_file = ""
	I0919 17:13:40.258030   28964 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0919 17:13:40.258040   28964 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0919 17:13:40.258055   28964 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0919 17:13:40.258065   28964 command_runner.go:130] > # separate_pull_cgroup = ""
	I0919 17:13:40.258076   28964 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0919 17:13:40.258090   28964 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0919 17:13:40.258101   28964 command_runner.go:130] > # will be added.
	I0919 17:13:40.258111   28964 command_runner.go:130] > # default_capabilities = [
	I0919 17:13:40.258120   28964 command_runner.go:130] > # 	"CHOWN",
	I0919 17:13:40.258131   28964 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0919 17:13:40.258139   28964 command_runner.go:130] > # 	"FSETID",
	I0919 17:13:40.258147   28964 command_runner.go:130] > # 	"FOWNER",
	I0919 17:13:40.258153   28964 command_runner.go:130] > # 	"SETGID",
	I0919 17:13:40.258164   28964 command_runner.go:130] > # 	"SETUID",
	I0919 17:13:40.258172   28964 command_runner.go:130] > # 	"SETPCAP",
	I0919 17:13:40.258179   28964 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0919 17:13:40.258184   28964 command_runner.go:130] > # 	"KILL",
	I0919 17:13:40.258192   28964 command_runner.go:130] > # ]
	I0919 17:13:40.258202   28964 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0919 17:13:40.258215   28964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 17:13:40.258226   28964 command_runner.go:130] > # default_sysctls = [
	I0919 17:13:40.258231   28964 command_runner.go:130] > # ]
	I0919 17:13:40.258240   28964 command_runner.go:130] > # List of devices on the host that a
	I0919 17:13:40.258249   28964 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0919 17:13:40.258261   28964 command_runner.go:130] > # allowed_devices = [
	I0919 17:13:40.258269   28964 command_runner.go:130] > # 	"/dev/fuse",
	I0919 17:13:40.258278   28964 command_runner.go:130] > # ]
	I0919 17:13:40.258288   28964 command_runner.go:130] > # List of additional devices. specified as
	I0919 17:13:40.258303   28964 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0919 17:13:40.258316   28964 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0919 17:13:40.258343   28964 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0919 17:13:40.258355   28964 command_runner.go:130] > # additional_devices = [
	I0919 17:13:40.258362   28964 command_runner.go:130] > # ]
	I0919 17:13:40.258369   28964 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0919 17:13:40.258380   28964 command_runner.go:130] > # cdi_spec_dirs = [
	I0919 17:13:40.258391   28964 command_runner.go:130] > # 	"/etc/cdi",
	I0919 17:13:40.258402   28964 command_runner.go:130] > # 	"/var/run/cdi",
	I0919 17:13:40.258413   28964 command_runner.go:130] > # ]
	I0919 17:13:40.258425   28964 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0919 17:13:40.258440   28964 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0919 17:13:40.258451   28964 command_runner.go:130] > # Defaults to false.
	I0919 17:13:40.258461   28964 command_runner.go:130] > # device_ownership_from_security_context = false
	I0919 17:13:40.258476   28964 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0919 17:13:40.258490   28964 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0919 17:13:40.258500   28964 command_runner.go:130] > # hooks_dir = [
	I0919 17:13:40.258508   28964 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0919 17:13:40.258516   28964 command_runner.go:130] > # ]
	I0919 17:13:40.258525   28964 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0919 17:13:40.258536   28964 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0919 17:13:40.258546   28964 command_runner.go:130] > # its default mounts from the following two files:
	I0919 17:13:40.258554   28964 command_runner.go:130] > #
	I0919 17:13:40.258565   28964 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0919 17:13:40.258574   28964 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0919 17:13:40.258585   28964 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0919 17:13:40.258594   28964 command_runner.go:130] > #
	I0919 17:13:40.258606   28964 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0919 17:13:40.258621   28964 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0919 17:13:40.258635   28964 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0919 17:13:40.258647   28964 command_runner.go:130] > #      only add mounts it finds in this file.
	I0919 17:13:40.258653   28964 command_runner.go:130] > #
	I0919 17:13:40.258663   28964 command_runner.go:130] > # default_mounts_file = ""
	I0919 17:13:40.258671   28964 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0919 17:13:40.258685   28964 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0919 17:13:40.258694   28964 command_runner.go:130] > pids_limit = 1024
	I0919 17:13:40.258704   28964 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0919 17:13:40.258748   28964 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0919 17:13:40.258763   28964 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0919 17:13:40.258779   28964 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0919 17:13:40.258790   28964 command_runner.go:130] > # log_size_max = -1
	I0919 17:13:40.258804   28964 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0919 17:13:40.258814   28964 command_runner.go:130] > # log_to_journald = false
	I0919 17:13:40.258828   28964 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0919 17:13:40.258841   28964 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0919 17:13:40.258853   28964 command_runner.go:130] > # Path to directory for container attach sockets.
	I0919 17:13:40.258866   28964 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0919 17:13:40.258876   28964 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0919 17:13:40.258887   28964 command_runner.go:130] > # bind_mount_prefix = ""
	I0919 17:13:40.258899   28964 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0919 17:13:40.258909   28964 command_runner.go:130] > # read_only = false
	I0919 17:13:40.258921   28964 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0919 17:13:40.258932   28964 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0919 17:13:40.258941   28964 command_runner.go:130] > # live configuration reload.
	I0919 17:13:40.258953   28964 command_runner.go:130] > # log_level = "info"
	I0919 17:13:40.258965   28964 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0919 17:13:40.258977   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:13:40.258987   28964 command_runner.go:130] > # log_filter = ""
	I0919 17:13:40.259004   28964 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0919 17:13:40.259017   28964 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0919 17:13:40.259026   28964 command_runner.go:130] > # separated by comma.
	I0919 17:13:40.259036   28964 command_runner.go:130] > # uid_mappings = ""
	I0919 17:13:40.259053   28964 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0919 17:13:40.259065   28964 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0919 17:13:40.259076   28964 command_runner.go:130] > # separated by comma.
	I0919 17:13:40.259086   28964 command_runner.go:130] > # gid_mappings = ""
	I0919 17:13:40.259098   28964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0919 17:13:40.259107   28964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 17:13:40.259115   28964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 17:13:40.259122   28964 command_runner.go:130] > # minimum_mappable_uid = -1
	I0919 17:13:40.259128   28964 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0919 17:13:40.259141   28964 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0919 17:13:40.259153   28964 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0919 17:13:40.259164   28964 command_runner.go:130] > # minimum_mappable_gid = -1
	I0919 17:13:40.259173   28964 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0919 17:13:40.259185   28964 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0919 17:13:40.259199   28964 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0919 17:13:40.259209   28964 command_runner.go:130] > # ctr_stop_timeout = 30
	I0919 17:13:40.259220   28964 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0919 17:13:40.259232   28964 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0919 17:13:40.259244   28964 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0919 17:13:40.259254   28964 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0919 17:13:40.259264   28964 command_runner.go:130] > drop_infra_ctr = false
	I0919 17:13:40.259274   28964 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0919 17:13:40.259286   28964 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0919 17:13:40.259299   28964 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0919 17:13:40.259310   28964 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0919 17:13:40.259319   28964 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0919 17:13:40.259331   28964 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0919 17:13:40.259341   28964 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0919 17:13:40.259356   28964 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0919 17:13:40.259367   28964 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0919 17:13:40.259377   28964 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0919 17:13:40.259390   28964 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0919 17:13:40.259404   28964 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0919 17:13:40.259414   28964 command_runner.go:130] > # default_runtime = "runc"
	I0919 17:13:40.259426   28964 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0919 17:13:40.259440   28964 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0919 17:13:40.259457   28964 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0919 17:13:40.259468   28964 command_runner.go:130] > # creation as a file is not desired either.
	I0919 17:13:40.259484   28964 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0919 17:13:40.259495   28964 command_runner.go:130] > # the hostname is being managed dynamically.
	I0919 17:13:40.259506   28964 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0919 17:13:40.259515   28964 command_runner.go:130] > # ]
	I0919 17:13:40.259529   28964 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0919 17:13:40.259540   28964 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0919 17:13:40.259549   28964 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0919 17:13:40.259582   28964 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0919 17:13:40.259588   28964 command_runner.go:130] > #
	I0919 17:13:40.259596   28964 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0919 17:13:40.259607   28964 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0919 17:13:40.259618   28964 command_runner.go:130] > #  runtime_type = "oci"
	I0919 17:13:40.259628   28964 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0919 17:13:40.259639   28964 command_runner.go:130] > #  privileged_without_host_devices = false
	I0919 17:13:40.259649   28964 command_runner.go:130] > #  allowed_annotations = []
	I0919 17:13:40.259658   28964 command_runner.go:130] > # Where:
	I0919 17:13:40.259670   28964 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0919 17:13:40.259684   28964 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0919 17:13:40.259697   28964 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0919 17:13:40.259710   28964 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0919 17:13:40.259719   28964 command_runner.go:130] > #   in $PATH.
	I0919 17:13:40.259731   28964 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0919 17:13:40.259743   28964 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0919 17:13:40.259756   28964 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0919 17:13:40.259766   28964 command_runner.go:130] > #   state.
	I0919 17:13:40.259780   28964 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0919 17:13:40.259793   28964 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0919 17:13:40.259807   28964 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0919 17:13:40.259819   28964 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0919 17:13:40.259831   28964 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0919 17:13:40.259843   28964 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0919 17:13:40.259850   28964 command_runner.go:130] > #   The currently recognized values are:
	I0919 17:13:40.259859   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0919 17:13:40.259869   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0919 17:13:40.259877   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0919 17:13:40.259886   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0919 17:13:40.259893   28964 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0919 17:13:40.259902   28964 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0919 17:13:40.259910   28964 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0919 17:13:40.259919   28964 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0919 17:13:40.259926   28964 command_runner.go:130] > #   should be moved to the container's cgroup
	I0919 17:13:40.259934   28964 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0919 17:13:40.259940   28964 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0919 17:13:40.259947   28964 command_runner.go:130] > runtime_type = "oci"
	I0919 17:13:40.259951   28964 command_runner.go:130] > runtime_root = "/run/runc"
	I0919 17:13:40.259958   28964 command_runner.go:130] > runtime_config_path = ""
	I0919 17:13:40.259962   28964 command_runner.go:130] > monitor_path = ""
	I0919 17:13:40.259968   28964 command_runner.go:130] > monitor_cgroup = ""
	I0919 17:13:40.259973   28964 command_runner.go:130] > monitor_exec_cgroup = ""
	I0919 17:13:40.259981   28964 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0919 17:13:40.259985   28964 command_runner.go:130] > # running containers
	I0919 17:13:40.259992   28964 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0919 17:13:40.260000   28964 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0919 17:13:40.260024   28964 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0919 17:13:40.260032   28964 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0919 17:13:40.260039   28964 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0919 17:13:40.260045   28964 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0919 17:13:40.260056   28964 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0919 17:13:40.260063   28964 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0919 17:13:40.260068   28964 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0919 17:13:40.260074   28964 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
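	A minimal sketch of how an extra runtime handler could be registered using the table format documented above; the drop-in path, the "crun" handler name, and the binary location are assumptions about a typical host, not values from this run:

	    # write a CRI-O drop-in that adds a "crun" handler following the documented format (assumed paths)
	    sudo tee /etc/crio/crio.conf.d/10-crun.conf <<'EOF'
	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	    EOF
	    # reload CRI-O so the new handler is picked up
	    sudo systemctl restart crio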
	I0919 17:13:40.260081   28964 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0919 17:13:40.260088   28964 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0919 17:13:40.260096   28964 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0919 17:13:40.260104   28964 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0919 17:13:40.260113   28964 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0919 17:13:40.260122   28964 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0919 17:13:40.260134   28964 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0919 17:13:40.260143   28964 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0919 17:13:40.260151   28964 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0919 17:13:40.260160   28964 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0919 17:13:40.260166   28964 command_runner.go:130] > # Example:
	I0919 17:13:40.260171   28964 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0919 17:13:40.260177   28964 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0919 17:13:40.260184   28964 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0919 17:13:40.260189   28964 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0919 17:13:40.260195   28964 command_runner.go:130] > # cpuset = 0
	I0919 17:13:40.260200   28964 command_runner.go:130] > # cpushares = "0-1"
	I0919 17:13:40.260205   28964 command_runner.go:130] > # Where:
	I0919 17:13:40.260212   28964 command_runner.go:130] > # The workload name is workload-type.
	I0919 17:13:40.260240   28964 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0919 17:13:40.260248   28964 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0919 17:13:40.260255   28964 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0919 17:13:40.260267   28964 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0919 17:13:40.260279   28964 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
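	A minimal sketch of how a pod could opt into the example workload above by carrying the activation annotation (key only, value ignored); the pod name is a placeholder and the image is simply the pause image already referenced in this config:

	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: workload-demo
	      annotations:
	        io.crio/workload: ""   # activation_annotation from the example above
	    spec:
	      containers:
	      - name: app
	        image: registry.k8s.io/pause:3.9
	    EOF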
	I0919 17:13:40.260288   28964 command_runner.go:130] > # 
	I0919 17:13:40.260301   28964 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0919 17:13:40.260310   28964 command_runner.go:130] > #
	I0919 17:13:40.260322   28964 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0919 17:13:40.260336   28964 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0919 17:13:40.260349   28964 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0919 17:13:40.260362   28964 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0919 17:13:40.260372   28964 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0919 17:13:40.260378   28964 command_runner.go:130] > [crio.image]
	I0919 17:13:40.260384   28964 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0919 17:13:40.260391   28964 command_runner.go:130] > # default_transport = "docker://"
	I0919 17:13:40.260397   28964 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0919 17:13:40.260420   28964 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0919 17:13:40.260428   28964 command_runner.go:130] > # global_auth_file = ""
	I0919 17:13:40.260441   28964 command_runner.go:130] > # The image used to instantiate infra containers.
	I0919 17:13:40.260449   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:13:40.260455   28964 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0919 17:13:40.260465   28964 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0919 17:13:40.260472   28964 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0919 17:13:40.260479   28964 command_runner.go:130] > # This option supports live configuration reload.
	I0919 17:13:40.260485   28964 command_runner.go:130] > # pause_image_auth_file = ""
	I0919 17:13:40.260493   28964 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0919 17:13:40.260499   28964 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0919 17:13:40.260507   28964 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0919 17:13:40.260513   28964 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0919 17:13:40.260517   28964 command_runner.go:130] > # pause_command = "/pause"
	I0919 17:13:40.260523   28964 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0919 17:13:40.260530   28964 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0919 17:13:40.260538   28964 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0919 17:13:40.260544   28964 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0919 17:13:40.260552   28964 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0919 17:13:40.260557   28964 command_runner.go:130] > # signature_policy = ""
	I0919 17:13:40.260563   28964 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0919 17:13:40.260572   28964 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0919 17:13:40.260576   28964 command_runner.go:130] > # changing them here.
	I0919 17:13:40.260583   28964 command_runner.go:130] > # insecure_registries = [
	I0919 17:13:40.260586   28964 command_runner.go:130] > # ]
	I0919 17:13:40.260592   28964 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0919 17:13:40.260597   28964 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0919 17:13:40.260602   28964 command_runner.go:130] > # image_volumes = "mkdir"
	I0919 17:13:40.260609   28964 command_runner.go:130] > # Temporary directory to use for storing big files
	I0919 17:13:40.260614   28964 command_runner.go:130] > # big_files_temporary_dir = ""
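	A minimal sketch of the registries.conf approach the comments above recommend for insecure registries, rather than editing crio.conf; the registry address is a placeholder:

	    # append a v2-format registry entry to the system-wide registries configuration
	    sudo tee -a /etc/containers/registries.conf <<'EOF'
	    [[registry]]
	    location = "registry.local:5000"
	    insecure = true
	    EOF
	    sudo systemctl restart crio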
	I0919 17:13:40.260620   28964 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0919 17:13:40.260626   28964 command_runner.go:130] > # CNI plugins.
	I0919 17:13:40.260630   28964 command_runner.go:130] > [crio.network]
	I0919 17:13:40.260638   28964 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0919 17:13:40.260644   28964 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0919 17:13:40.260650   28964 command_runner.go:130] > # cni_default_network = ""
	I0919 17:13:40.260656   28964 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0919 17:13:40.260663   28964 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0919 17:13:40.260673   28964 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0919 17:13:40.260683   28964 command_runner.go:130] > # plugin_dirs = [
	I0919 17:13:40.260690   28964 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0919 17:13:40.260699   28964 command_runner.go:130] > # ]
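	A minimal sketch of a CNI configuration dropped into the network_dir documented above; when cni_default_network is unset, CRI-O uses the first configuration it finds there. The file name, network name and subnet are placeholders (this cluster actually uses the kindnet manifest applied later in the log):

	    sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge-net",
	      "plugins": [
	        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF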
	I0919 17:13:40.260708   28964 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0919 17:13:40.260717   28964 command_runner.go:130] > [crio.metrics]
	I0919 17:13:40.260726   28964 command_runner.go:130] > # Globally enable or disable metrics support.
	I0919 17:13:40.260735   28964 command_runner.go:130] > enable_metrics = true
	I0919 17:13:40.260746   28964 command_runner.go:130] > # Specify enabled metrics collectors.
	I0919 17:13:40.260757   28964 command_runner.go:130] > # Per default all metrics are enabled.
	I0919 17:13:40.260767   28964 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0919 17:13:40.260775   28964 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0919 17:13:40.260785   28964 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0919 17:13:40.260792   28964 command_runner.go:130] > # metrics_collectors = [
	I0919 17:13:40.260796   28964 command_runner.go:130] > # 	"operations",
	I0919 17:13:40.260806   28964 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0919 17:13:40.260817   28964 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0919 17:13:40.260828   28964 command_runner.go:130] > # 	"operations_errors",
	I0919 17:13:40.260835   28964 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0919 17:13:40.260845   28964 command_runner.go:130] > # 	"image_pulls_by_name",
	I0919 17:13:40.260854   28964 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0919 17:13:40.260865   28964 command_runner.go:130] > # 	"image_pulls_failures",
	I0919 17:13:40.260875   28964 command_runner.go:130] > # 	"image_pulls_successes",
	I0919 17:13:40.260881   28964 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0919 17:13:40.260888   28964 command_runner.go:130] > # 	"image_layer_reuse",
	I0919 17:13:40.260892   28964 command_runner.go:130] > # 	"containers_oom_total",
	I0919 17:13:40.260899   28964 command_runner.go:130] > # 	"containers_oom",
	I0919 17:13:40.260903   28964 command_runner.go:130] > # 	"processes_defunct",
	I0919 17:13:40.260907   28964 command_runner.go:130] > # 	"operations_total",
	I0919 17:13:40.260935   28964 command_runner.go:130] > # 	"operations_latency_seconds",
	I0919 17:13:40.260942   28964 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0919 17:13:40.260948   28964 command_runner.go:130] > # 	"operations_errors_total",
	I0919 17:13:40.260958   28964 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0919 17:13:40.260969   28964 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0919 17:13:40.260976   28964 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0919 17:13:40.260986   28964 command_runner.go:130] > # 	"image_pulls_success_total",
	I0919 17:13:40.260993   28964 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0919 17:13:40.261004   28964 command_runner.go:130] > # 	"containers_oom_count_total",
	I0919 17:13:40.261013   28964 command_runner.go:130] > # ]
	I0919 17:13:40.261026   28964 command_runner.go:130] > # The port on which the metrics server will listen.
	I0919 17:13:40.261036   28964 command_runner.go:130] > # metrics_port = 9090
	I0919 17:13:40.261048   28964 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0919 17:13:40.261062   28964 command_runner.go:130] > # metrics_socket = ""
	I0919 17:13:40.261074   28964 command_runner.go:130] > # The certificate for the secure metrics server.
	I0919 17:13:40.261087   28964 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0919 17:13:40.261096   28964 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0919 17:13:40.261101   28964 command_runner.go:130] > # certificate on any modification event.
	I0919 17:13:40.261107   28964 command_runner.go:130] > # metrics_cert = ""
	I0919 17:13:40.261112   28964 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0919 17:13:40.261119   28964 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0919 17:13:40.261126   28964 command_runner.go:130] > # metrics_key = ""
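	Since this config sets enable_metrics = true, the Prometheus endpoint can be scraped directly on the node; a minimal sketch, assuming the commented default port 9090 is in effect:

	    curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio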
	I0919 17:13:40.261132   28964 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0919 17:13:40.261138   28964 command_runner.go:130] > [crio.tracing]
	I0919 17:13:40.261144   28964 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0919 17:13:40.261151   28964 command_runner.go:130] > # enable_tracing = false
	I0919 17:13:40.261163   28964 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0919 17:13:40.261174   28964 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0919 17:13:40.261186   28964 command_runner.go:130] > # Number of samples to collect per million spans.
	I0919 17:13:40.261197   28964 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0919 17:13:40.261210   28964 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0919 17:13:40.261219   28964 command_runner.go:130] > [crio.stats]
	I0919 17:13:40.261232   28964 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0919 17:13:40.261243   28964 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0919 17:13:40.261254   28964 command_runner.go:130] > # stats_collection_period = 0
	I0919 17:13:40.261557   28964 command_runner.go:130] ! time="2023-09-19 17:13:40.245889720Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0919 17:13:40.261585   28964 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0919 17:13:40.261764   28964 cni.go:84] Creating CNI manager for ""
	I0919 17:13:40.261780   28964 cni.go:136] 3 nodes found, recommending kindnet
	I0919 17:13:40.261791   28964 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:13:40.261813   28964 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-553715 NodeName:multinode-553715-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 17:13:40.261959   28964 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-553715-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:13:40.262007   28964 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-553715-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
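	The empty "ExecStart=" line above clears the ExecStart inherited from the base kubelet unit before the drop-in redefines it, which is standard systemd drop-in behaviour. A minimal sketch of how the effective unit could be verified on the node (illustrative commands, not part of this run):

	    systemctl cat kubelet                 # show the unit plus all drop-ins
	    systemctl show kubelet -p ExecStart   # show the command systemd will actually run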
	I0919 17:13:40.262054   28964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 17:13:40.271940   28964 command_runner.go:130] > kubeadm
	I0919 17:13:40.271957   28964 command_runner.go:130] > kubectl
	I0919 17:13:40.271963   28964 command_runner.go:130] > kubelet
	I0919 17:13:40.272103   28964 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:13:40.272167   28964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0919 17:13:40.281527   28964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0919 17:13:40.297760   28964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:13:40.313484   28964 ssh_runner.go:195] Run: grep 192.168.39.38	control-plane.minikube.internal$ /etc/hosts
	I0919 17:13:40.317084   28964 command_runner.go:130] > 192.168.39.38	control-plane.minikube.internal
	I0919 17:13:40.317333   28964 host.go:66] Checking if "multinode-553715" exists ...
	I0919 17:13:40.317617   28964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:13:40.317650   28964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:13:40.317657   28964 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:13:40.332952   28964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36187
	I0919 17:13:40.333354   28964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:13:40.333739   28964 main.go:141] libmachine: Using API Version  1
	I0919 17:13:40.333762   28964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:13:40.334105   28964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:13:40.334303   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:13:40.334464   28964 start.go:304] JoinCluster: &{Name:multinode-553715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.2 ClusterName:multinode-553715 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.38 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.11 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:13:40.334591   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 17:13:40.334609   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:13:40.337228   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:13:40.337694   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:13:40.337726   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:13:40.337849   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:13:40.338017   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:13:40.338157   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:13:40.338295   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:13:40.524742   28964 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token i18qj6.6b1zwbeav83f7xxc --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
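	A minimal sketch of how the bootstrap token printed above could be inspected or regenerated on the control plane; illustrative only, apart from the token create invocation the test itself issued:

	    sudo kubeadm token list
	    sudo kubeadm token create --print-join-command --ttl=0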
	I0919 17:13:40.533630   28964 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 17:13:40.533673   28964 host.go:66] Checking if "multinode-553715" exists ...
	I0919 17:13:40.533974   28964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:13:40.534023   28964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:13:40.547966   28964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44135
	I0919 17:13:40.548379   28964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:13:40.548846   28964 main.go:141] libmachine: Using API Version  1
	I0919 17:13:40.548868   28964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:13:40.549172   28964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:13:40.549328   28964 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:13:40.549487   28964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-553715-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0919 17:13:40.549514   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:13:40.552121   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:13:40.552566   28964 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:13:40.552596   28964 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:13:40.552721   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:13:40.552892   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:13:40.553034   28964 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:13:40.553157   28964 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:13:40.765835   28964 command_runner.go:130] > node/multinode-553715-m03 cordoned
	I0919 17:13:43.804730   28964 command_runner.go:130] > pod "busybox-5bc68d56bd-fs98x" has DeletionTimestamp older than 1 seconds, skipping
	I0919 17:13:43.804757   28964 command_runner.go:130] > node/multinode-553715-m03 drained
	I0919 17:13:43.807311   28964 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0919 17:13:43.807341   28964 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-s8d6g, kube-system/kube-proxy-gnjwl
	I0919 17:13:43.807362   28964 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-553715-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.257853936s)
	I0919 17:13:43.807374   28964 node.go:108] successfully drained node "m03"
	I0919 17:13:43.807763   28964 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:13:43.808064   28964 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:13:43.808401   28964 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0919 17:13:43.808474   28964 round_trippers.go:463] DELETE https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:13:43.808483   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:43.808494   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:43.808507   28964 round_trippers.go:473]     Content-Type: application/json
	I0919 17:13:43.808517   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:43.823241   28964 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0919 17:13:43.823263   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:43.823272   28964 round_trippers.go:580]     Audit-Id: e3b98db3-cf71-4693-8e16-866140093d7a
	I0919 17:13:43.823280   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:43.823288   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:43.823297   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:43.823305   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:43.823311   28964 round_trippers.go:580]     Content-Length: 171
	I0919 17:13:43.823316   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:43 GMT
	I0919 17:13:43.823334   28964 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-553715-m03","kind":"nodes","uid":"f3827816-de3c-418e-aa24-505b515ee53b"}}
	I0919 17:13:43.823364   28964 node.go:124] successfully deleted node "m03"
	I0919 17:13:43.823377   28964 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 17:13:43.823395   28964 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 17:13:43.823414   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i18qj6.6b1zwbeav83f7xxc --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-553715-m03"
	I0919 17:13:43.888666   28964 command_runner.go:130] ! W0919 17:13:43.883047    2352 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0919 17:13:43.889194   28964 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0919 17:13:44.034890   28964 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0919 17:13:44.034928   28964 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0919 17:13:44.796761   28964 command_runner.go:130] > [preflight] Running pre-flight checks
	I0919 17:13:44.796791   28964 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0919 17:13:44.796804   28964 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0919 17:13:44.796817   28964 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:13:44.796829   28964 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:13:44.796837   28964 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0919 17:13:44.796847   28964 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0919 17:13:44.796860   28964 command_runner.go:130] > This node has joined the cluster:
	I0919 17:13:44.796870   28964 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0919 17:13:44.796881   28964 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0919 17:13:44.796892   28964 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0919 17:13:44.796923   28964 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 17:13:45.058456   28964 start.go:306] JoinCluster complete in 4.723989452s
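	The remove-and-rejoin flow above, expressed as the equivalent manual steps; a sketch only, with the token and CA hash elided rather than copied from the log:

	    # on the control plane: evict and remove the stale worker
	    kubectl drain multinode-553715-m03 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data --disable-eviction
	    kubectl delete node multinode-553715-m03
	    # on the worker: rejoin with a fresh token
	    sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
	      --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all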
	I0919 17:13:45.058476   28964 cni.go:84] Creating CNI manager for ""
	I0919 17:13:45.058482   28964 cni.go:136] 3 nodes found, recommending kindnet
	I0919 17:13:45.058526   28964 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 17:13:45.064925   28964 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0919 17:13:45.064951   28964 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0919 17:13:45.064961   28964 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0919 17:13:45.064968   28964 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 17:13:45.064975   28964 command_runner.go:130] > Access: 2023-09-19 17:09:35.017363855 +0000
	I0919 17:13:45.064979   28964 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I0919 17:13:45.064985   28964 command_runner.go:130] > Change: 2023-09-19 17:09:33.166363855 +0000
	I0919 17:13:45.064989   28964 command_runner.go:130] >  Birth: -
	I0919 17:13:45.065044   28964 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0919 17:13:45.065058   28964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0919 17:13:45.082471   28964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 17:13:45.483607   28964 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0919 17:13:45.483635   28964 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0919 17:13:45.483646   28964 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0919 17:13:45.483653   28964 command_runner.go:130] > daemonset.apps/kindnet configured
	I0919 17:13:45.484078   28964 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:13:45.484273   28964 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:13:45.484560   28964 round_trippers.go:463] GET https://192.168.39.38:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 17:13:45.484572   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.484579   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.484585   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.486899   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:45.486927   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.486938   28964 round_trippers.go:580]     Content-Length: 291
	I0919 17:13:45.486948   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.486956   28964 round_trippers.go:580]     Audit-Id: 272de5e9-cac4-4387-a27d-c2d3b54da6df
	I0919 17:13:45.486963   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.486968   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.486975   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.486981   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.487007   28964 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e750729b-e558-4860-b72f-4a5c78572130","resourceVersion":"888","creationTimestamp":"2023-09-19T16:59:41Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0919 17:13:45.487100   28964 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-553715" context rescaled to 1 replicas
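	The same rescale can be expressed with kubectl, which drives the identical scale subresource queried above; a sketch, not a command issued by this run:

	    kubectl -n kube-system scale deployment coredns --replicas=1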
	I0919 17:13:45.487134   28964 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.229 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 17:13:45.488976   28964 out.go:177] * Verifying Kubernetes components...
	I0919 17:13:45.490386   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:13:45.506148   28964 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:13:45.506349   28964 kapi.go:59] client config for multinode-553715: &rest.Config{Host:"https://192.168.39.38:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/multinode-553715/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:13:45.506559   28964 node_ready.go:35] waiting up to 6m0s for node "multinode-553715-m03" to be "Ready" ...
	I0919 17:13:45.506615   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:13:45.506623   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.506630   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.506636   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.509668   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:13:45.509693   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.509769   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.509795   28964 round_trippers.go:580]     Audit-Id: 0fd58fce-0458-40d9-9b64-de422a3b5742
	I0919 17:13:45.509805   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.509816   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.509827   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.509844   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.509955   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m03","uid":"9baa2497-f38c-421f-9475-64ef635edd1a","resourceVersion":"1218","creationTimestamp":"2023-09-19T17:13:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:13:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:13:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0919 17:13:45.510185   28964 node_ready.go:49] node "multinode-553715-m03" has status "Ready":"True"
	I0919 17:13:45.510199   28964 node_ready.go:38] duration metric: took 3.626366ms waiting for node "multinode-553715-m03" to be "Ready" ...
	I0919 17:13:45.510210   28964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:13:45.510277   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods
	I0919 17:13:45.510288   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.510299   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.510309   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.514656   28964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 17:13:45.514673   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.514680   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.514686   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.514691   28964 round_trippers.go:580]     Audit-Id: f4019626-07d2-4939-9413-e576eeb5fe5e
	I0919 17:13:45.514696   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.514701   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.514709   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.516083   28964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1224"},"items":[{"metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"867","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82039 chars]
	I0919 17:13:45.518607   28964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.518667   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-pffkm
	I0919 17:13:45.518675   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.518682   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.518688   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.521304   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:45.521322   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.521332   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.521341   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.521350   28964 round_trippers.go:580]     Audit-Id: 6542fd9c-6652-4417-8412-c819e129ae2a
	I0919 17:13:45.521358   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.521364   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.521369   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.521569   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-pffkm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fbc226fb-43a9-4e0f-ac99-614f2740485d","resourceVersion":"867","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a91dc508-5b00-490e-a22d-1e7c07855cf3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a91dc508-5b00-490e-a22d-1e7c07855cf3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0919 17:13:45.522008   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:13:45.522020   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.522027   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.522033   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.524127   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:45.524139   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.524145   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.524151   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.524157   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.524165   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.524174   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.524187   28964 round_trippers.go:580]     Audit-Id: b8218437-fd5a-49dc-9703-e69af9ecbd47
	I0919 17:13:45.524391   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:13:45.524682   28964 pod_ready.go:92] pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace has status "Ready":"True"
	I0919 17:13:45.524695   28964 pod_ready.go:81] duration metric: took 6.068745ms waiting for pod "coredns-5dd5756b68-pffkm" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.524702   28964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.524744   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-553715
	I0919 17:13:45.524750   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.524758   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.524766   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.526570   28964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 17:13:45.526581   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.526587   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.526593   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.526598   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.526603   28964 round_trippers.go:580]     Audit-Id: ee398f99-50ad-4995-b1ad-8cc4c73f45e7
	I0919 17:13:45.526608   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.526613   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.527058   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-553715","namespace":"kube-system","uid":"905a0370-ab9d-4138-bd11-12297717f1c5","resourceVersion":"890","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.38:2379","kubernetes.io/config.hash":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.mirror":"5aec1a98400bc46affacaeefcf7efa64","kubernetes.io/config.seen":"2023-09-19T16:59:41.749097727Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0919 17:13:45.527433   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:13:45.527448   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.527459   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.527469   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.530047   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:45.530061   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.530068   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.530073   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.530078   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.530083   28964 round_trippers.go:580]     Audit-Id: 9b4320b5-4b63-4e77-8a05-e96884646b4d
	I0919 17:13:45.530088   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.530094   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.530265   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:13:45.530621   28964 pod_ready.go:92] pod "etcd-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:13:45.530637   28964 pod_ready.go:81] duration metric: took 5.927157ms waiting for pod "etcd-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.530657   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.530707   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-553715
	I0919 17:13:45.530717   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.530728   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.530737   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.533127   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:45.533145   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.533155   28964 round_trippers.go:580]     Audit-Id: d8d525e5-4487-4b2e-8259-e3b7d4ad3d91
	I0919 17:13:45.533164   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.533176   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.533188   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.533200   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.533210   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.533331   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-553715","namespace":"kube-system","uid":"e2712b6a-6771-4fb1-9b6d-e50e10e45411","resourceVersion":"859","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.38:8443","kubernetes.io/config.hash":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.mirror":"e33690ea2d34a4cb01de0af39fba7d80","kubernetes.io/config.seen":"2023-09-19T16:59:41.749099288Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0919 17:13:45.533657   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:13:45.533667   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.533674   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.533680   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.535751   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:45.535770   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.535779   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.535787   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.535799   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.535810   28964 round_trippers.go:580]     Audit-Id: aacbc6b3-5779-442d-8c7a-0f90ae07b7d6
	I0919 17:13:45.535820   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.535831   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.536084   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:13:45.536356   28964 pod_ready.go:92] pod "kube-apiserver-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:13:45.536369   28964 pod_ready.go:81] duration metric: took 5.704839ms waiting for pod "kube-apiserver-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.536378   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.536446   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-553715
	I0919 17:13:45.536457   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.536464   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.536472   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.539559   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:13:45.539579   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.539587   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.539595   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.539606   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.539615   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.539626   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.539636   28964 round_trippers.go:580]     Audit-Id: 1153e54c-b286-49cb-9a07-b1eb9de94fee
	I0919 17:13:45.539796   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-553715","namespace":"kube-system","uid":"56eb8685-d2ae-4f50-8da1-dca616585190","resourceVersion":"861","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.mirror":"ff6f265dbf948bb708b38919b675e1b5","kubernetes.io/config.seen":"2023-09-19T16:59:41.749100351Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0919 17:13:45.540155   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:13:45.540167   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.540174   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.540182   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.542667   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:45.542681   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.542687   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.542692   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.542700   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.542707   28964 round_trippers.go:580]     Audit-Id: 1fd9f7f9-2aaf-4fa1-8ec0-ae080605604d
	I0919 17:13:45.542715   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.542720   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.543254   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:13:45.543599   28964 pod_ready.go:92] pod "kube-controller-manager-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:13:45.543615   28964 pod_ready.go:81] duration metric: took 7.224198ms waiting for pod "kube-controller-manager-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.543623   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.706893   28964 request.go:629] Waited for 163.214132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:13:45.706952   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d5vl8
	I0919 17:13:45.706959   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.706969   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.706978   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.710775   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:13:45.710796   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.710805   28964 round_trippers.go:580]     Audit-Id: 047ce7c5-6165-4e83-a28c-e607788447e5
	I0919 17:13:45.710813   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.710821   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.710829   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.710838   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.710851   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.711267   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d5vl8","generateName":"kube-proxy-","namespace":"kube-system","uid":"88ab05d6-264f-40d8-9c55-c58829613212","resourceVersion":"1058","creationTimestamp":"2023-09-19T17:00:35Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:00:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0919 17:13:45.907106   28964 request.go:629] Waited for 195.35048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:13:45.907166   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m02
	I0919 17:13:45.907173   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:45.907183   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:45.907192   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:45.909827   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:45.909843   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:45.909849   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:45.909855   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:45.909859   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:45.909865   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:45.909870   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:45 GMT
	I0919 17:13:45.909875   28964 round_trippers.go:580]     Audit-Id: 75b4d039-0e76-4864-8da0-6f13e9ec348a
	I0919 17:13:45.910174   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m02","uid":"1030d8e1-d1f3-42be-8dbe-2f7c852318bd","resourceVersion":"1040","creationTimestamp":"2023-09-19T17:12:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:12:01Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I0919 17:13:45.910405   28964 pod_ready.go:92] pod "kube-proxy-d5vl8" in "kube-system" namespace has status "Ready":"True"
	I0919 17:13:45.910416   28964 pod_ready.go:81] duration metric: took 366.785368ms waiting for pod "kube-proxy-d5vl8" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:45.910427   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gnjwl" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:46.106795   28964 request.go:629] Waited for 196.298994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:13:46.106856   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:13:46.106862   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:46.106871   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:46.106881   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:46.109658   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:46.109683   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:46.109693   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:46 GMT
	I0919 17:13:46.109703   28964 round_trippers.go:580]     Audit-Id: d0b07647-277f-43de-ae65-55c6c8ec3c50
	I0919 17:13:46.109711   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:46.109723   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:46.109731   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:46.109743   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:46.109885   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gnjwl","generateName":"kube-proxy-","namespace":"kube-system","uid":"86e13bd9-e0df-4a0b-b9a7-1746bb37c23b","resourceVersion":"1222","creationTimestamp":"2023-09-19T17:01:27Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:01:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0919 17:13:46.306766   28964 request.go:629] Waited for 196.33919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:13:46.306829   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:13:46.306834   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:46.306842   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:46.306856   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:46.310059   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:13:46.310076   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:46.310083   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:46 GMT
	I0919 17:13:46.310089   28964 round_trippers.go:580]     Audit-Id: 130fa446-c46d-44f2-b35f-e4eb9cc53a45
	I0919 17:13:46.310094   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:46.310099   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:46.310104   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:46.310109   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:46.310306   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m03","uid":"9baa2497-f38c-421f-9475-64ef635edd1a","resourceVersion":"1218","creationTimestamp":"2023-09-19T17:13:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:13:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:13:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0919 17:13:46.506996   28964 request.go:629] Waited for 196.332922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:13:46.507058   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gnjwl
	I0919 17:13:46.507077   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:46.507088   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:46.507098   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:46.510231   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:13:46.510248   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:46.510255   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:46 GMT
	I0919 17:13:46.510260   28964 round_trippers.go:580]     Audit-Id: 81550bbe-ae23-4026-bf26-c5ff69dd1968
	I0919 17:13:46.510265   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:46.510270   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:46.510275   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:46.510280   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:46.510410   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gnjwl","generateName":"kube-proxy-","namespace":"kube-system","uid":"86e13bd9-e0df-4a0b-b9a7-1746bb37c23b","resourceVersion":"1233","creationTimestamp":"2023-09-19T17:01:27Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:01:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0919 17:13:46.707224   28964 request.go:629] Waited for 196.397121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:13:46.707279   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715-m03
	I0919 17:13:46.707284   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:46.707292   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:46.707297   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:46.710261   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:46.710284   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:46.710291   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:46.710298   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:46.710306   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:46.710315   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:46.710325   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:46 GMT
	I0919 17:13:46.710334   28964 round_trippers.go:580]     Audit-Id: a0a8bf81-68ee-4a5c-bb36-0c722d058b04
	I0919 17:13:46.710517   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715-m03","uid":"9baa2497-f38c-421f-9475-64ef635edd1a","resourceVersion":"1218","creationTimestamp":"2023-09-19T17:13:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:13:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T17:13:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0919 17:13:46.710781   28964 pod_ready.go:92] pod "kube-proxy-gnjwl" in "kube-system" namespace has status "Ready":"True"
	I0919 17:13:46.710796   28964 pod_ready.go:81] duration metric: took 800.361595ms waiting for pod "kube-proxy-gnjwl" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:46.710805   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:46.907261   28964 request.go:629] Waited for 196.381619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:13:46.907320   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tvcz9
	I0919 17:13:46.907326   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:46.907333   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:46.907340   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:46.910601   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:13:46.910623   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:46.910629   28964 round_trippers.go:580]     Audit-Id: 54dfefe4-2a55-4230-87b8-3c6a0b3683f0
	I0919 17:13:46.910635   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:46.910640   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:46.910645   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:46.910650   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:46.910655   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:46 GMT
	I0919 17:13:46.910795   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tvcz9","generateName":"kube-proxy-","namespace":"kube-system","uid":"377d6478-cda2-47b9-8af8-cff3064e8524","resourceVersion":"825","creationTimestamp":"2023-09-19T16:59:54Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"247a2e8e-711d-4502-a560-6460001a1a35","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"247a2e8e-711d-4502-a560-6460001a1a35\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0919 17:13:47.107528   28964 request.go:629] Waited for 196.353121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:13:47.107580   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:13:47.107585   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:47.107592   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:47.107598   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:47.110519   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:47.110534   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:47.110540   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:47.110545   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:47 GMT
	I0919 17:13:47.110555   28964 round_trippers.go:580]     Audit-Id: 43920d95-279b-45ae-b0ed-f4ada2cde0e8
	I0919 17:13:47.110562   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:47.110570   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:47.110579   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:47.110789   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:13:47.111087   28964 pod_ready.go:92] pod "kube-proxy-tvcz9" in "kube-system" namespace has status "Ready":"True"
	I0919 17:13:47.111098   28964 pod_ready.go:81] duration metric: took 400.287568ms waiting for pod "kube-proxy-tvcz9" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:47.111109   28964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:47.306958   28964 request.go:629] Waited for 195.788917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:13:47.307027   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-553715
	I0919 17:13:47.307032   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:47.307039   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:47.307046   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:47.313295   28964 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0919 17:13:47.313314   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:47.313321   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:47.313327   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:47.313332   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:47 GMT
	I0919 17:13:47.313337   28964 round_trippers.go:580]     Audit-Id: 327e6702-febd-4fc4-9d5e-7ad02747c2f5
	I0919 17:13:47.313341   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:47.313346   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:47.313467   28964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-553715","namespace":"kube-system","uid":"27c15070-fba4-4237-b6d2-4727af1e5809","resourceVersion":"857","creationTimestamp":"2023-09-19T16:59:41Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.mirror":"aa979cb86e107ca9bf520d48522186cc","kubernetes.io/config.seen":"2023-09-19T16:59:41.749088169Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:59:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0919 17:13:47.507247   28964 request.go:629] Waited for 193.395445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:13:47.507309   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes/multinode-553715
	I0919 17:13:47.507314   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:47.507321   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:47.507327   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:47.511227   28964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 17:13:47.511244   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:47.511250   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:47 GMT
	I0919 17:13:47.511256   28964 round_trippers.go:580]     Audit-Id: 202c2fe7-30a3-4fdd-b097-ebcb04878da8
	I0919 17:13:47.511262   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:47.511267   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:47.511276   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:47.511284   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:47.511487   28964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-19T16:59:38Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0919 17:13:47.511784   28964 pod_ready.go:92] pod "kube-scheduler-multinode-553715" in "kube-system" namespace has status "Ready":"True"
	I0919 17:13:47.511796   28964 pod_ready.go:81] duration metric: took 400.679332ms waiting for pod "kube-scheduler-multinode-553715" in "kube-system" namespace to be "Ready" ...
	I0919 17:13:47.511805   28964 pod_ready.go:38] duration metric: took 2.001581128s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:13:47.511825   28964 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:13:47.511878   28964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:13:47.524856   28964 system_svc.go:56] duration metric: took 13.025616ms WaitForService to wait for kubelet.
	I0919 17:13:47.524877   28964 kubeadm.go:581] duration metric: took 2.037720512s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:13:47.524895   28964 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:13:47.707298   28964 request.go:629] Waited for 182.344331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.38:8443/api/v1/nodes
	I0919 17:13:47.707348   28964 round_trippers.go:463] GET https://192.168.39.38:8443/api/v1/nodes
	I0919 17:13:47.707353   28964 round_trippers.go:469] Request Headers:
	I0919 17:13:47.707360   28964 round_trippers.go:473]     Accept: application/json, */*
	I0919 17:13:47.707367   28964 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 17:13:47.710379   28964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 17:13:47.710405   28964 round_trippers.go:577] Response Headers:
	I0919 17:13:47.710420   28964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 17:13:47.710428   28964 round_trippers.go:580]     Content-Type: application/json
	I0919 17:13:47.710433   28964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9e55fb19-043e-43ce-8c0b-5bae8505fa75
	I0919 17:13:47.710438   28964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fa957d7d-91e6-4a66-b5b9-0c173d8f7c2e
	I0919 17:13:47.710444   28964 round_trippers.go:580]     Date: Tue, 19 Sep 2023 17:13:47 GMT
	I0919 17:13:47.710449   28964 round_trippers.go:580]     Audit-Id: c2be592e-dc30-42fe-87d6-73b3991f73bf
	I0919 17:13:47.710676   28964 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1237"},"items":[{"metadata":{"name":"multinode-553715","uid":"1e0da286-967e-4441-bb61-e847645b1f43","resourceVersion":"898","creationTimestamp":"2023-09-19T16:59:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-553715","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-553715","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_59_42_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15133 chars]
	I0919 17:13:47.711287   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:13:47.711306   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:13:47.711315   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:13:47.711319   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:13:47.711323   28964 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:13:47.711328   28964 node_conditions.go:123] node cpu capacity is 2
	I0919 17:13:47.711332   28964 node_conditions.go:105] duration metric: took 186.432602ms to run NodePressure ...
	I0919 17:13:47.711341   28964 start.go:228] waiting for startup goroutines ...
	I0919 17:13:47.711361   28964 start.go:242] writing updated cluster config ...
	I0919 17:13:47.711625   28964 ssh_runner.go:195] Run: rm -f paused
	I0919 17:13:47.759240   28964 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:13:47.762346   28964 out.go:177] * Done! kubectl is now configured to use "multinode-553715" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:09:33 UTC, ends at Tue 2023-09-19 17:13:49 UTC. --
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.873728388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695143628873714325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ae691cea-a4f2-4c0d-ac22-2f46c65cc9b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.874234122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e90179c1-49d9-4fb9-891a-1f6d7e34ba8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.874312443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e90179c1-49d9-4fb9-891a-1f6d7e34ba8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.875833989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45828de8df77bb1d68dc782e77ee3cc51289f82cfa7b3052ed5b871bdee2c437,PodSandboxId:0a4c4612e941a6a644c45f0b600ca00064c2b629a335bd219de41fa9d692a31d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695143439059110683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2e18439d19b3aab7ff690fa2cefe0d83c5ae5fdfb7ca21b6d9436f883e2d04,PodSandboxId:13ff87f614dc214ff5f6026531cc2a48aa5022fa068c0f103af4934224224fad,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1695143418177716287,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xj8tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92501a7-dae6-46bb-afb7-2ea5795f162d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d866c54,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40b8aa6e7fd0d12bac7d0f7ac1a0a37e9827669699ebfc7bc2f2d75612adbc1,PodSandboxId:ab771809fdc03cd0fc884c38340a4e0bdef4e8b9e795fcab3680cf9ac5c1d677,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695143415364961507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pffkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc226fb-43a9-4e0f-ac99-614f2740485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7b97018c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e108702adb12f131413d4e0c52978a91b5625d26563fd8b85188f8592bbd0a55,PodSandboxId:fa32142853dc1b0b3a533e421811859624ffaf4698c49e38788fa63be9c8870c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1695143410355489092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lmmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2479ec2b-6cd3-4fb2-b85f-43b175cfbb79,},Annotations:map[string]string{io.kubernetes.container.hash: 3da7acbd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1883038fcf6477d6f66ce67cd8539e978150cce6f9a3953dda19c796cba8c9,PodSandboxId:93aa17a3703874fd1529121970f7118a1d1a8e1a2087aef472bb2e45b47398cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695143408484037777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvcz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d6478-cda2-47b9-8af8-cff306
4e8524,},Annotations:map[string]string{io.kubernetes.container.hash: 11af597a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51e88c31b4668d28e50d1cf37481240da500728f1965441b8dc110937f036ee,PodSandboxId:0a4c4612e941a6a644c45f0b600ca00064c2b629a335bd219de41fa9d692a31d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695143408220808373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a
61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfb955fdf66e6ac0e1822f61d1ee9bd0e5df6686de332faf3b8e01912cfc99b,PodSandboxId:b1eafbc444bb9f08a734f003973fbc141bedd2444a3e16f76dc496a3ca85c561,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695143401448300183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa979cb86e107ca9bf520d48522186cc,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9ce5c10eb4be1db0d36a793e24540e04104e79b5fa6c6ba055d622f13a43cc,PodSandboxId:190cc8a9fbd565f70c39975ec1494d3a5a3b9e611f1ed736f6bdb1f551a6d080,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695143401283472720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aec1a98400bc46affacaeefcf7efa64,},Annotations:map[string]string{io.kubernetes.container.has
h: 9a550da8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0dd131477d762d21733f3e5637213f7cb6ad2f71ae797db3359dc07c5ca912,PodSandboxId:2461c9bbb9e3dbb33d3330174a6e35707cf1cf928efc6541e7d82b9df7238e4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695143401025947861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff6f265dbf948bb708b38919b675e1b5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb5ec0547e93ddc1987905533929ada3c843e38f38b7e9f02228f0603d15c87,PodSandboxId:6363fab342a001456be52770b7d52bc1b80c7c58714f0b7f93d8f2b9e448e66f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695143400748783780,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e33690ea2d34a4cb01de0af39fba7d80,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a39fc94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e90179c1-49d9-4fb9-891a-1f6d7e34ba8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.925084575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e88064b1-b69e-4b2a-8b66-cf2c2f84ac1c name=/runtime.v1.RuntimeService/Version
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.925163265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e88064b1-b69e-4b2a-8b66-cf2c2f84ac1c name=/runtime.v1.RuntimeService/Version
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.926432275Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0fa26c11-33ca-4f00-b91f-c902bec43601 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.926859235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695143628926844628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0fa26c11-33ca-4f00-b91f-c902bec43601 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.927341044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eace1cc2-4b6e-4139-a62c-b17a13cc4a81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.927454149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eace1cc2-4b6e-4139-a62c-b17a13cc4a81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.927653677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45828de8df77bb1d68dc782e77ee3cc51289f82cfa7b3052ed5b871bdee2c437,PodSandboxId:0a4c4612e941a6a644c45f0b600ca00064c2b629a335bd219de41fa9d692a31d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695143439059110683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2e18439d19b3aab7ff690fa2cefe0d83c5ae5fdfb7ca21b6d9436f883e2d04,PodSandboxId:13ff87f614dc214ff5f6026531cc2a48aa5022fa068c0f103af4934224224fad,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1695143418177716287,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xj8tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92501a7-dae6-46bb-afb7-2ea5795f162d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d866c54,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40b8aa6e7fd0d12bac7d0f7ac1a0a37e9827669699ebfc7bc2f2d75612adbc1,PodSandboxId:ab771809fdc03cd0fc884c38340a4e0bdef4e8b9e795fcab3680cf9ac5c1d677,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695143415364961507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pffkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc226fb-43a9-4e0f-ac99-614f2740485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7b97018c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e108702adb12f131413d4e0c52978a91b5625d26563fd8b85188f8592bbd0a55,PodSandboxId:fa32142853dc1b0b3a533e421811859624ffaf4698c49e38788fa63be9c8870c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1695143410355489092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lmmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2479ec2b-6cd3-4fb2-b85f-43b175cfbb79,},Annotations:map[string]string{io.kubernetes.container.hash: 3da7acbd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1883038fcf6477d6f66ce67cd8539e978150cce6f9a3953dda19c796cba8c9,PodSandboxId:93aa17a3703874fd1529121970f7118a1d1a8e1a2087aef472bb2e45b47398cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695143408484037777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvcz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d6478-cda2-47b9-8af8-cff306
4e8524,},Annotations:map[string]string{io.kubernetes.container.hash: 11af597a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51e88c31b4668d28e50d1cf37481240da500728f1965441b8dc110937f036ee,PodSandboxId:0a4c4612e941a6a644c45f0b600ca00064c2b629a335bd219de41fa9d692a31d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695143408220808373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a
61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfb955fdf66e6ac0e1822f61d1ee9bd0e5df6686de332faf3b8e01912cfc99b,PodSandboxId:b1eafbc444bb9f08a734f003973fbc141bedd2444a3e16f76dc496a3ca85c561,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695143401448300183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa979cb86e107ca9bf520d48522186cc,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9ce5c10eb4be1db0d36a793e24540e04104e79b5fa6c6ba055d622f13a43cc,PodSandboxId:190cc8a9fbd565f70c39975ec1494d3a5a3b9e611f1ed736f6bdb1f551a6d080,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695143401283472720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aec1a98400bc46affacaeefcf7efa64,},Annotations:map[string]string{io.kubernetes.container.has
h: 9a550da8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0dd131477d762d21733f3e5637213f7cb6ad2f71ae797db3359dc07c5ca912,PodSandboxId:2461c9bbb9e3dbb33d3330174a6e35707cf1cf928efc6541e7d82b9df7238e4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695143401025947861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff6f265dbf948bb708b38919b675e1b5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb5ec0547e93ddc1987905533929ada3c843e38f38b7e9f02228f0603d15c87,PodSandboxId:6363fab342a001456be52770b7d52bc1b80c7c58714f0b7f93d8f2b9e448e66f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695143400748783780,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e33690ea2d34a4cb01de0af39fba7d80,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a39fc94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eace1cc2-4b6e-4139-a62c-b17a13cc4a81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.969525552Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=989dddec-8260-48fd-b0e7-b4008bc2acaa name=/runtime.v1.RuntimeService/Version
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.969608826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=989dddec-8260-48fd-b0e7-b4008bc2acaa name=/runtime.v1.RuntimeService/Version
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.970568978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=16338609-9432-4632-8178-2078353fe0a8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.971022368Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695143628970998843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=16338609-9432-4632-8178-2078353fe0a8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.971727087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b3b23c81-0a05-4593-96a0-311588c6ab0e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.971806962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b3b23c81-0a05-4593-96a0-311588c6ab0e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:48 multinode-553715 crio[714]: time="2023-09-19 17:13:48.972055388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45828de8df77bb1d68dc782e77ee3cc51289f82cfa7b3052ed5b871bdee2c437,PodSandboxId:0a4c4612e941a6a644c45f0b600ca00064c2b629a335bd219de41fa9d692a31d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695143439059110683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2e18439d19b3aab7ff690fa2cefe0d83c5ae5fdfb7ca21b6d9436f883e2d04,PodSandboxId:13ff87f614dc214ff5f6026531cc2a48aa5022fa068c0f103af4934224224fad,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1695143418177716287,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xj8tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92501a7-dae6-46bb-afb7-2ea5795f162d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d866c54,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40b8aa6e7fd0d12bac7d0f7ac1a0a37e9827669699ebfc7bc2f2d75612adbc1,PodSandboxId:ab771809fdc03cd0fc884c38340a4e0bdef4e8b9e795fcab3680cf9ac5c1d677,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695143415364961507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pffkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc226fb-43a9-4e0f-ac99-614f2740485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7b97018c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e108702adb12f131413d4e0c52978a91b5625d26563fd8b85188f8592bbd0a55,PodSandboxId:fa32142853dc1b0b3a533e421811859624ffaf4698c49e38788fa63be9c8870c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1695143410355489092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lmmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2479ec2b-6cd3-4fb2-b85f-43b175cfbb79,},Annotations:map[string]string{io.kubernetes.container.hash: 3da7acbd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1883038fcf6477d6f66ce67cd8539e978150cce6f9a3953dda19c796cba8c9,PodSandboxId:93aa17a3703874fd1529121970f7118a1d1a8e1a2087aef472bb2e45b47398cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695143408484037777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvcz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d6478-cda2-47b9-8af8-cff306
4e8524,},Annotations:map[string]string{io.kubernetes.container.hash: 11af597a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51e88c31b4668d28e50d1cf37481240da500728f1965441b8dc110937f036ee,PodSandboxId:0a4c4612e941a6a644c45f0b600ca00064c2b629a335bd219de41fa9d692a31d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695143408220808373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a
61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfb955fdf66e6ac0e1822f61d1ee9bd0e5df6686de332faf3b8e01912cfc99b,PodSandboxId:b1eafbc444bb9f08a734f003973fbc141bedd2444a3e16f76dc496a3ca85c561,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695143401448300183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa979cb86e107ca9bf520d48522186cc,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9ce5c10eb4be1db0d36a793e24540e04104e79b5fa6c6ba055d622f13a43cc,PodSandboxId:190cc8a9fbd565f70c39975ec1494d3a5a3b9e611f1ed736f6bdb1f551a6d080,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695143401283472720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aec1a98400bc46affacaeefcf7efa64,},Annotations:map[string]string{io.kubernetes.container.has
h: 9a550da8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0dd131477d762d21733f3e5637213f7cb6ad2f71ae797db3359dc07c5ca912,PodSandboxId:2461c9bbb9e3dbb33d3330174a6e35707cf1cf928efc6541e7d82b9df7238e4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695143401025947861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff6f265dbf948bb708b38919b675e1b5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb5ec0547e93ddc1987905533929ada3c843e38f38b7e9f02228f0603d15c87,PodSandboxId:6363fab342a001456be52770b7d52bc1b80c7c58714f0b7f93d8f2b9e448e66f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695143400748783780,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e33690ea2d34a4cb01de0af39fba7d80,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a39fc94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b3b23c81-0a05-4593-96a0-311588c6ab0e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:49 multinode-553715 crio[714]: time="2023-09-19 17:13:49.013362236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e8f8eb1d-55f8-41c3-8394-babd377f2c2b name=/runtime.v1.RuntimeService/Version
	Sep 19 17:13:49 multinode-553715 crio[714]: time="2023-09-19 17:13:49.013510097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e8f8eb1d-55f8-41c3-8394-babd377f2c2b name=/runtime.v1.RuntimeService/Version
	Sep 19 17:13:49 multinode-553715 crio[714]: time="2023-09-19 17:13:49.014358232Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1273dda7-1aab-4b71-b908-d45b42043b96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:13:49 multinode-553715 crio[714]: time="2023-09-19 17:13:49.014851951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695143629014839853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1273dda7-1aab-4b71-b908-d45b42043b96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:13:49 multinode-553715 crio[714]: time="2023-09-19 17:13:49.015522984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2cbfbde7-5d07-4146-a810-59ff48703f64 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:49 multinode-553715 crio[714]: time="2023-09-19 17:13:49.015597674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2cbfbde7-5d07-4146-a810-59ff48703f64 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:13:49 multinode-553715 crio[714]: time="2023-09-19 17:13:49.015788289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45828de8df77bb1d68dc782e77ee3cc51289f82cfa7b3052ed5b871bdee2c437,PodSandboxId:0a4c4612e941a6a644c45f0b600ca00064c2b629a335bd219de41fa9d692a31d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695143439059110683,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2e18439d19b3aab7ff690fa2cefe0d83c5ae5fdfb7ca21b6d9436f883e2d04,PodSandboxId:13ff87f614dc214ff5f6026531cc2a48aa5022fa068c0f103af4934224224fad,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1695143418177716287,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-xj8tc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b92501a7-dae6-46bb-afb7-2ea5795f162d,},Annotations:map[string]string{io.kubernetes.container.hash: 8d866c54,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40b8aa6e7fd0d12bac7d0f7ac1a0a37e9827669699ebfc7bc2f2d75612adbc1,PodSandboxId:ab771809fdc03cd0fc884c38340a4e0bdef4e8b9e795fcab3680cf9ac5c1d677,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695143415364961507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-pffkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc226fb-43a9-4e0f-ac99-614f2740485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7b97018c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e108702adb12f131413d4e0c52978a91b5625d26563fd8b85188f8592bbd0a55,PodSandboxId:fa32142853dc1b0b3a533e421811859624ffaf4698c49e38788fa63be9c8870c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1695143410355489092,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-lmmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 2479ec2b-6cd3-4fb2-b85f-43b175cfbb79,},Annotations:map[string]string{io.kubernetes.container.hash: 3da7acbd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1883038fcf6477d6f66ce67cd8539e978150cce6f9a3953dda19c796cba8c9,PodSandboxId:93aa17a3703874fd1529121970f7118a1d1a8e1a2087aef472bb2e45b47398cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695143408484037777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tvcz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377d6478-cda2-47b9-8af8-cff306
4e8524,},Annotations:map[string]string{io.kubernetes.container.hash: 11af597a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51e88c31b4668d28e50d1cf37481240da500728f1965441b8dc110937f036ee,PodSandboxId:0a4c4612e941a6a644c45f0b600ca00064c2b629a335bd219de41fa9d692a31d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695143408220808373,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e1b33b9-5b0d-48d5-92d8-4dc1f58a
61d8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e7ea2f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcfb955fdf66e6ac0e1822f61d1ee9bd0e5df6686de332faf3b8e01912cfc99b,PodSandboxId:b1eafbc444bb9f08a734f003973fbc141bedd2444a3e16f76dc496a3ca85c561,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695143401448300183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa979cb86e107ca9bf520d48522186cc,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9ce5c10eb4be1db0d36a793e24540e04104e79b5fa6c6ba055d622f13a43cc,PodSandboxId:190cc8a9fbd565f70c39975ec1494d3a5a3b9e611f1ed736f6bdb1f551a6d080,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695143401283472720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aec1a98400bc46affacaeefcf7efa64,},Annotations:map[string]string{io.kubernetes.container.has
h: 9a550da8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0dd131477d762d21733f3e5637213f7cb6ad2f71ae797db3359dc07c5ca912,PodSandboxId:2461c9bbb9e3dbb33d3330174a6e35707cf1cf928efc6541e7d82b9df7238e4a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695143401025947861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff6f265dbf948bb708b38919b675e1b5,},Annotations:map[string]string{io.
kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fb5ec0547e93ddc1987905533929ada3c843e38f38b7e9f02228f0603d15c87,PodSandboxId:6363fab342a001456be52770b7d52bc1b80c7c58714f0b7f93d8f2b9e448e66f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695143400748783780,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-553715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e33690ea2d34a4cb01de0af39fba7d80,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a39fc94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2cbfbde7-5d07-4146-a810-59ff48703f64 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	45828de8df77b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   0a4c4612e941a       storage-provisioner
	fa2e18439d19b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   13ff87f614dc2       busybox-5bc68d56bd-xj8tc
	b40b8aa6e7fd0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   ab771809fdc03       coredns-5dd5756b68-pffkm
	e108702adb12f       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   fa32142853dc1       kindnet-lmmc5
	2b1883038fcf6       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      3 minutes ago       Running             kube-proxy                1                   93aa17a370387       kube-proxy-tvcz9
	a51e88c31b466       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   0a4c4612e941a       storage-provisioner
	fcfb955fdf66e       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      3 minutes ago       Running             kube-scheduler            1                   b1eafbc444bb9       kube-scheduler-multinode-553715
	2e9ce5c10eb4b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   190cc8a9fbd56       etcd-multinode-553715
	3f0dd131477d7       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      3 minutes ago       Running             kube-controller-manager   1                   2461c9bbb9e3d       kube-controller-manager-multinode-553715
	4fb5ec0547e93       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      3 minutes ago       Running             kube-apiserver            1                   6363fab342a00       kube-apiserver-multinode-553715
	
	* 
	* ==> coredns [b40b8aa6e7fd0d12bac7d0f7ac1a0a37e9827669699ebfc7bc2f2d75612adbc1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52767 - 51454 "HINFO IN 7989222544527872409.4856048781650830470. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010982152s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-553715
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553715
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=multinode-553715
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T16_59_42_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:59:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-553715
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 17:13:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:10:36 +0000   Tue, 19 Sep 2023 16:59:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:10:36 +0000   Tue, 19 Sep 2023 16:59:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:10:36 +0000   Tue, 19 Sep 2023 16:59:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:10:36 +0000   Tue, 19 Sep 2023 17:10:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    multinode-553715
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c97d65ac98704ad7a5677568b3778fc7
	  System UUID:                c97d65ac-9870-4ad7-a567-7568b3778fc7
	  Boot ID:                    260034f1-3f9e-4d0e-a1f8-eaddd1f387c6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xj8tc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-pffkm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-553715                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-lmmc5                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-553715             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-553715    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-tvcz9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-553715             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-553715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-553715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-553715 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-553715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-553715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-553715 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-553715 event: Registered Node multinode-553715 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-553715 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-553715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-553715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-553715 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m31s                  node-controller  Node multinode-553715 event: Registered Node multinode-553715 in Controller
	
	
	Name:               multinode-553715-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553715-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:12:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-553715-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 17:13:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:12:01 +0000   Tue, 19 Sep 2023 17:12:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:12:01 +0000   Tue, 19 Sep 2023 17:12:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:12:01 +0000   Tue, 19 Sep 2023 17:12:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:12:01 +0000   Tue, 19 Sep 2023 17:12:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    multinode-553715-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0239eab67234c52995c51bf9e0aa8db
	  System UUID:                e0239eab-6723-4c52-995c-51bf9e0aa8db
	  Boot ID:                    eca459e4-d007-4468-8d0d-7543c98e0af9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-46x44    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-ccllv               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-d5vl8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 106s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-553715-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-553715-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-553715-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m52s                  kubelet     Node multinode-553715-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m15s (x2 over 3m15s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       111s                   kubelet     Node multinode-553715-m02 status is now: NodeNotSchedulable
	  Normal   NodeReady                111s (x2 over 13m)     kubelet     Node multinode-553715-m02 status is now: NodeReady
	  Normal   Starting                 108s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  108s (x2 over 108s)    kubelet     Node multinode-553715-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    108s (x2 over 108s)    kubelet     Node multinode-553715-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s (x2 over 108s)    kubelet     Node multinode-553715-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  108s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                108s                   kubelet     Node multinode-553715-m02 status is now: NodeReady
	
	
	Name:               multinode-553715-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-553715-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:13:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-553715-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:13:44 +0000   Tue, 19 Sep 2023 17:13:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:13:44 +0000   Tue, 19 Sep 2023 17:13:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:13:44 +0000   Tue, 19 Sep 2023 17:13:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:13:44 +0000   Tue, 19 Sep 2023 17:13:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    multinode-553715-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ae382454edd4eab886d9e2ab39d879c
	  System UUID:                2ae38245-4edd-4eab-886d-9e2ab39d879c
	  Boot ID:                    b291541b-2193-434b-8a24-d33d480a874d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-fs98x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kindnet-s8d6g               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-gnjwl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From             Message
	  ----     ------                   ----                ----             -------
	  Normal   Starting                 11m                 kube-proxy       
	  Normal   Starting                 12m                 kube-proxy       
	  Normal   Starting                 3s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet          Node multinode-553715-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet          Node multinode-553715-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet          Node multinode-553715-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                 kubelet          Node multinode-553715-m03 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  11m                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet          Node multinode-553715-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet          Node multinode-553715-m03 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet          Node multinode-553715-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                11m                 kubelet          Node multinode-553715-m03 status is now: NodeReady
	  Normal   NodeNotReady             70s                 kubelet          Node multinode-553715-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        40s (x2 over 100s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet          Node multinode-553715-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet          Node multinode-553715-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet          Node multinode-553715-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet          Node multinode-553715-m03 status is now: NodeReady
	  Normal   RegisteredNode           1s                  node-controller  Node multinode-553715-m03 event: Registered Node multinode-553715-m03 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071174] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.356421] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.307856] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153330] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.631836] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.155045] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.108448] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.152453] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.115252] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.201754] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +16.525983] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[Sep19 17:10] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [2e9ce5c10eb4be1db0d36a793e24540e04104e79b5fa6c6ba055d622f13a43cc] <==
	* {"level":"info","ts":"2023-09-19T17:10:02.910203Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T17:10:02.910229Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T17:10:02.910509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da switched to configuration voters=(4085449137511063770)"}
	{"level":"info","ts":"2023-09-19T17:10:02.91059Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","added-peer-id":"38b26e584d45e0da","added-peer-peer-urls":["https://192.168.39.38:2380"]}
	{"level":"info","ts":"2023-09-19T17:10:02.910691Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"afb1a6a08b4dab74","local-member-id":"38b26e584d45e0da","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:10:02.910734Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:10:02.915987Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-19T17:10:02.916249Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"38b26e584d45e0da","initial-advertise-peer-urls":["https://192.168.39.38:2380"],"listen-peer-urls":["https://192.168.39.38:2380"],"advertise-client-urls":["https://192.168.39.38:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.38:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-19T17:10:02.916094Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2023-09-19T17:10:02.91687Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.38:2380"}
	{"level":"info","ts":"2023-09-19T17:10:02.916807Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-19T17:10:04.57272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-19T17:10:04.572888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-19T17:10:04.572946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgPreVoteResp from 38b26e584d45e0da at term 2"}
	{"level":"info","ts":"2023-09-19T17:10:04.572987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became candidate at term 3"}
	{"level":"info","ts":"2023-09-19T17:10:04.573011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da received MsgVoteResp from 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2023-09-19T17:10:04.573044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b26e584d45e0da became leader at term 3"}
	{"level":"info","ts":"2023-09-19T17:10:04.573073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 38b26e584d45e0da elected leader 38b26e584d45e0da at term 3"}
	{"level":"info","ts":"2023-09-19T17:10:04.576015Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"38b26e584d45e0da","local-member-attributes":"{Name:multinode-553715 ClientURLs:[https://192.168.39.38:2379]}","request-path":"/0/members/38b26e584d45e0da/attributes","cluster-id":"afb1a6a08b4dab74","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:10:04.576308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:10:04.576545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:10:04.576598Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:10:04.576781Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:10:04.578054Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.38:2379"}
	{"level":"info","ts":"2023-09-19T17:10:04.578057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  17:13:49 up 4 min,  0 users,  load average: 0.34, 0.22, 0.10
	Linux multinode-553715 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [e108702adb12f131413d4e0c52978a91b5625d26563fd8b85188f8592bbd0a55] <==
	* I0919 17:13:01.990611       1 main.go:250] Node multinode-553715-m03 has CIDR [10.244.3.0/24] 
	I0919 17:13:12.004101       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 17:13:12.004151       1 main.go:227] handling current node
	I0919 17:13:12.004162       1 main.go:223] Handling node with IPs: map[192.168.39.11:{}]
	I0919 17:13:12.004168       1 main.go:250] Node multinode-553715-m02 has CIDR [10.244.1.0/24] 
	I0919 17:13:12.004260       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0919 17:13:12.004294       1 main.go:250] Node multinode-553715-m03 has CIDR [10.244.3.0/24] 
	I0919 17:13:22.011745       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 17:13:22.011831       1 main.go:227] handling current node
	I0919 17:13:22.011856       1 main.go:223] Handling node with IPs: map[192.168.39.11:{}]
	I0919 17:13:22.011888       1 main.go:250] Node multinode-553715-m02 has CIDR [10.244.1.0/24] 
	I0919 17:13:22.012000       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0919 17:13:22.012020       1 main.go:250] Node multinode-553715-m03 has CIDR [10.244.3.0/24] 
	I0919 17:13:32.024548       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 17:13:32.024602       1 main.go:227] handling current node
	I0919 17:13:32.024614       1 main.go:223] Handling node with IPs: map[192.168.39.11:{}]
	I0919 17:13:32.024619       1 main.go:250] Node multinode-553715-m02 has CIDR [10.244.1.0/24] 
	I0919 17:13:32.024724       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0919 17:13:32.024729       1 main.go:250] Node multinode-553715-m03 has CIDR [10.244.3.0/24] 
	I0919 17:13:42.034829       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0919 17:13:42.035055       1 main.go:227] handling current node
	I0919 17:13:42.035100       1 main.go:223] Handling node with IPs: map[192.168.39.11:{}]
	I0919 17:13:42.035121       1 main.go:250] Node multinode-553715-m02 has CIDR [10.244.1.0/24] 
	I0919 17:13:42.035269       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0919 17:13:42.035290       1 main.go:250] Node multinode-553715-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [4fb5ec0547e93ddc1987905533929ada3c843e38f38b7e9f02228f0603d15c87] <==
	* I0919 17:10:05.929909       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 17:10:05.930003       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 17:10:06.021853       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0919 17:10:06.021896       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0919 17:10:06.021904       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0919 17:10:06.027678       1 aggregator.go:166] initial CRD sync complete...
	I0919 17:10:06.027730       1 autoregister_controller.go:141] Starting autoregister controller
	I0919 17:10:06.027737       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 17:10:06.028572       1 shared_informer.go:318] Caches are synced for configmaps
	I0919 17:10:06.028726       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0919 17:10:06.126074       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0919 17:10:06.126146       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0919 17:10:06.128880       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 17:10:06.128987       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 17:10:06.129513       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 17:10:06.129717       1 cache.go:39] Caches are synced for autoregister controller
	I0919 17:10:06.145072       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0919 17:10:06.941661       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 17:10:08.576195       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0919 17:10:08.757534       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0919 17:10:08.774973       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0919 17:10:08.869017       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 17:10:08.883609       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 17:10:19.003873       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 17:10:19.103807       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [3f0dd131477d762d21733f3e5637213f7cb6ad2f71ae797db3359dc07c5ca912] <==
	* I0919 17:12:01.446481       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-553715-m02\" does not exist"
	I0919 17:12:01.457272       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-553715-m02" podCIDRs=["10.244.1.0/24"]
	I0919 17:12:01.589848       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553715-m02"
	I0919 17:12:02.185630       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.847082ms"
	I0919 17:12:02.185796       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="40.51µs"
	I0919 17:12:02.358692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.966µs"
	I0919 17:12:13.616559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="194.333µs"
	I0919 17:12:14.219348       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="114.799µs"
	I0919 17:12:14.223852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="112.008µs"
	I0919 17:12:39.219868       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553715-m02"
	I0919 17:13:40.811556       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-46x44"
	I0919 17:13:40.837108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.3073ms"
	I0919 17:13:40.856361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.175443ms"
	I0919 17:13:40.856525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.938µs"
	I0919 17:13:42.510999       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.39293ms"
	I0919 17:13:42.511163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.016µs"
	I0919 17:13:43.818079       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553715-m02"
	I0919 17:13:43.976236       1 event.go:307] "Event occurred" object="multinode-553715-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-553715-m03 event: Removing Node multinode-553715-m03 from Controller"
	I0919 17:13:44.483520       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553715-m02"
	I0919 17:13:44.483954       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-553715-m03\" does not exist"
	I0919 17:13:44.484002       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-fs98x" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-fs98x"
	I0919 17:13:44.508735       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-553715-m03" podCIDRs=["10.244.2.0/24"]
	I0919 17:13:44.622799       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-553715-m02"
	I0919 17:13:45.385312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.753µs"
	I0919 17:13:48.977621       1 event.go:307] "Event occurred" object="multinode-553715-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-553715-m03 event: Registered Node multinode-553715-m03 in Controller"
	
	* 
	* ==> kube-proxy [2b1883038fcf6477d6f66ce67cd8539e978150cce6f9a3953dda19c796cba8c9] <==
	* I0919 17:10:08.767538       1 server_others.go:69] "Using iptables proxy"
	I0919 17:10:08.783016       1 node.go:141] Successfully retrieved node IP: 192.168.39.38
	I0919 17:10:08.900315       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 17:10:08.900364       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 17:10:08.921684       1 server_others.go:152] "Using iptables Proxier"
	I0919 17:10:08.921756       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 17:10:08.921951       1 server.go:846] "Version info" version="v1.28.2"
	I0919 17:10:08.921962       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:10:08.925755       1 config.go:188] "Starting service config controller"
	I0919 17:10:08.925808       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 17:10:08.925856       1 config.go:97] "Starting endpoint slice config controller"
	I0919 17:10:08.925876       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 17:10:08.931521       1 config.go:315] "Starting node config controller"
	I0919 17:10:08.931891       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 17:10:09.026945       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 17:10:09.026949       1 shared_informer.go:318] Caches are synced for service config
	I0919 17:10:09.032441       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [fcfb955fdf66e6ac0e1822f61d1ee9bd0e5df6686de332faf3b8e01912cfc99b] <==
	* I0919 17:10:03.374326       1 serving.go:348] Generated self-signed cert in-memory
	W0919 17:10:06.048044       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 17:10:06.048453       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:10:06.048599       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 17:10:06.048634       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 17:10:06.083522       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0919 17:10:06.083646       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:10:06.111497       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 17:10:06.113459       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 17:10:06.118515       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 17:10:06.118824       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 17:10:06.216521       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:09:33 UTC, ends at Tue 2023-09-19 17:13:49 UTC. --
	Sep 19 17:10:08 multinode-553715 kubelet[920]: E0919 17:10:08.460761     920 projected.go:198] Error preparing data for projected volume kube-api-access-v97x7 for pod default/busybox-5bc68d56bd-xj8tc: object "default"/"kube-root-ca.crt" not registered
	Sep 19 17:10:08 multinode-553715 kubelet[920]: E0919 17:10:08.460809     920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b92501a7-dae6-46bb-afb7-2ea5795f162d-kube-api-access-v97x7 podName:b92501a7-dae6-46bb-afb7-2ea5795f162d nodeName:}" failed. No retries permitted until 2023-09-19 17:10:10.460792027 +0000 UTC m=+10.864714607 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v97x7" (UniqueName: "kubernetes.io/projected/b92501a7-dae6-46bb-afb7-2ea5795f162d-kube-api-access-v97x7") pod "busybox-5bc68d56bd-xj8tc" (UID: "b92501a7-dae6-46bb-afb7-2ea5795f162d") : object "default"/"kube-root-ca.crt" not registered
	Sep 19 17:10:08 multinode-553715 kubelet[920]: E0919 17:10:08.849809     920 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-pffkm" podUID="fbc226fb-43a9-4e0f-ac99-614f2740485d"
	Sep 19 17:10:08 multinode-553715 kubelet[920]: E0919 17:10:08.850230     920 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-xj8tc" podUID="b92501a7-dae6-46bb-afb7-2ea5795f162d"
	Sep 19 17:10:10 multinode-553715 kubelet[920]: E0919 17:10:10.375698     920 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 17:10:10 multinode-553715 kubelet[920]: E0919 17:10:10.375751     920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fbc226fb-43a9-4e0f-ac99-614f2740485d-config-volume podName:fbc226fb-43a9-4e0f-ac99-614f2740485d nodeName:}" failed. No retries permitted until 2023-09-19 17:10:14.375739202 +0000 UTC m=+14.779661766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fbc226fb-43a9-4e0f-ac99-614f2740485d-config-volume") pod "coredns-5dd5756b68-pffkm" (UID: "fbc226fb-43a9-4e0f-ac99-614f2740485d") : object "kube-system"/"coredns" not registered
	Sep 19 17:10:10 multinode-553715 kubelet[920]: E0919 17:10:10.476274     920 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 19 17:10:10 multinode-553715 kubelet[920]: E0919 17:10:10.476304     920 projected.go:198] Error preparing data for projected volume kube-api-access-v97x7 for pod default/busybox-5bc68d56bd-xj8tc: object "default"/"kube-root-ca.crt" not registered
	Sep 19 17:10:10 multinode-553715 kubelet[920]: E0919 17:10:10.476478     920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b92501a7-dae6-46bb-afb7-2ea5795f162d-kube-api-access-v97x7 podName:b92501a7-dae6-46bb-afb7-2ea5795f162d nodeName:}" failed. No retries permitted until 2023-09-19 17:10:14.476457244 +0000 UTC m=+14.880379823 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-v97x7" (UniqueName: "kubernetes.io/projected/b92501a7-dae6-46bb-afb7-2ea5795f162d-kube-api-access-v97x7") pod "busybox-5bc68d56bd-xj8tc" (UID: "b92501a7-dae6-46bb-afb7-2ea5795f162d") : object "default"/"kube-root-ca.crt" not registered
	Sep 19 17:10:10 multinode-553715 kubelet[920]: E0919 17:10:10.850163     920 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-xj8tc" podUID="b92501a7-dae6-46bb-afb7-2ea5795f162d"
	Sep 19 17:10:10 multinode-553715 kubelet[920]: E0919 17:10:10.850264     920 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-pffkm" podUID="fbc226fb-43a9-4e0f-ac99-614f2740485d"
	Sep 19 17:10:12 multinode-553715 kubelet[920]: I0919 17:10:12.265224     920 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 19 17:10:39 multinode-553715 kubelet[920]: I0919 17:10:39.036774     920 scope.go:117] "RemoveContainer" containerID="a51e88c31b4668d28e50d1cf37481240da500728f1965441b8dc110937f036ee"
	Sep 19 17:10:59 multinode-553715 kubelet[920]: E0919 17:10:59.869586     920 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 17:10:59 multinode-553715 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 17:10:59 multinode-553715 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 17:10:59 multinode-553715 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 17:11:59 multinode-553715 kubelet[920]: E0919 17:11:59.869096     920 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 17:11:59 multinode-553715 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 17:11:59 multinode-553715 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 17:11:59 multinode-553715 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 17:12:59 multinode-553715 kubelet[920]: E0919 17:12:59.877025     920 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 17:12:59 multinode-553715 kubelet[920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 17:12:59 multinode-553715 kubelet[920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 17:12:59 multinode-553715 kubelet[920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-553715 -n multinode-553715
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-553715 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (688.24s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553715 stop: exit status 82 (2m0.977210485s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-553715"  ...
	* Stopping node "multinode-553715"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-553715 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553715 status: exit status 3 (18.608602582s)

                                                
                                                
-- stdout --
	multinode-553715
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-553715-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:16:11.828695   31272 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0919 17:16:11.828741   31272 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-553715 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-553715 -n multinode-553715
E0919 17:16:14.062027   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-553715 -n multinode-553715: exit status 3 (3.14875053s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:16:15.156751   31371 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0919 17:16:15.156770   31371 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-553715" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.74s)

                                                
                                    
x
+
TestPreload (263.79s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-766296 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0919 17:25:59.333157   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:26:14.060568   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-766296 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m37.073770516s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-766296 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-766296 image pull gcr.io/k8s-minikube/busybox: (2.819677025s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-766296
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-766296: (8.083984023s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-766296 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0919 17:27:56.282015   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:28:21.263666   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-766296 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m32.818772583s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-766296 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:523: *** TestPreload FAILED at 2023-09-19 17:28:56.879371684 +0000 UTC m=+3269.811355721
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-766296 -n test-preload-766296
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-766296 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-766296 logs -n 25: (1.083476393s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n multinode-553715 sudo cat                                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | /home/docker/cp-test_multinode-553715-m03_multinode-553715.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-553715 cp multinode-553715-m03:/home/docker/cp-test.txt                       | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m02:/home/docker/cp-test_multinode-553715-m03_multinode-553715-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n                                                                 | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | multinode-553715-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-553715 ssh -n multinode-553715-m02 sudo cat                                   | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | /home/docker/cp-test_multinode-553715-m03_multinode-553715-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-553715 node stop m03                                                          | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	| node    | multinode-553715 node start                                                             | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:02 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-553715                                                                | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:02 UTC |                     |
	| stop    | -p multinode-553715                                                                     | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:02 UTC |                     |
	| start   | -p multinode-553715                                                                     | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:04 UTC | 19 Sep 23 17:13 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-553715                                                                | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:13 UTC |                     |
	| node    | multinode-553715 node delete                                                            | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:13 UTC | 19 Sep 23 17:13 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-553715 stop                                                                   | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:13 UTC |                     |
	| start   | -p multinode-553715                                                                     | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:16 UTC | 19 Sep 23 17:23 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-553715                                                                | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:23 UTC |                     |
	| start   | -p multinode-553715-m02                                                                 | multinode-553715-m02 | jenkins | v1.31.2 | 19 Sep 23 17:23 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-553715-m03                                                                 | multinode-553715-m03 | jenkins | v1.31.2 | 19 Sep 23 17:23 UTC | 19 Sep 23 17:24 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-553715                                                                 | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:24 UTC |                     |
	| delete  | -p multinode-553715-m03                                                                 | multinode-553715-m03 | jenkins | v1.31.2 | 19 Sep 23 17:24 UTC | 19 Sep 23 17:24 UTC |
	| delete  | -p multinode-553715                                                                     | multinode-553715     | jenkins | v1.31.2 | 19 Sep 23 17:24 UTC | 19 Sep 23 17:24 UTC |
	| start   | -p test-preload-766296                                                                  | test-preload-766296  | jenkins | v1.31.2 | 19 Sep 23 17:24 UTC | 19 Sep 23 17:27 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-766296 image pull                                                          | test-preload-766296  | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:27 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-766296                                                                  | test-preload-766296  | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:27 UTC |
	| start   | -p test-preload-766296                                                                  | test-preload-766296  | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:28 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-766296 image list                                                          | test-preload-766296  | jenkins | v1.31.2 | 19 Sep 23 17:28 UTC | 19 Sep 23 17:28 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
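	The tail of the command history above is the TestPreload sequence: create a cluster on Kubernetes v1.24.4 with the preload disabled, pull an extra image, stop the VM, then restart with the preload enabled so the cached image has to survive the restart. A rough shell equivalent of those rows (reconstructed from the table; the profile name, flags and versions are taken from the log, and the sequence is only a sketch of what the Go test harness drives through out/minikube-linux-amd64):

	  # sketch only - the test harness issues these through out/minikube-linux-amd64
	  minikube start -p test-preload-766296 --memory=2200 --alsologtostderr --wait=true \
	    --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	  minikube -p test-preload-766296 image pull gcr.io/k8s-minikube/busybox
	  minikube stop -p test-preload-766296
	  minikube start -p test-preload-766296 --memory=2200 --alsologtostderr -v=1 \
	    --wait=true --driver=kvm2 --container-runtime=crio
	  minikube -p test-preload-766296 image list   # busybox should still be listed after the restart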
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:27:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:27:23.888940   34650 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:27:23.889194   34650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:27:23.889203   34650 out.go:309] Setting ErrFile to fd 2...
	I0919 17:27:23.889208   34650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:27:23.889368   34650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:27:23.889869   34650 out.go:303] Setting JSON to false
	I0919 17:27:23.890725   34650 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4194,"bootTime":1695140250,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:27:23.890782   34650 start.go:138] virtualization: kvm guest
	I0919 17:27:23.893022   34650 out.go:177] * [test-preload-766296] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:27:23.894922   34650 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:27:23.896264   34650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:27:23.894972   34650 notify.go:220] Checking for updates...
	I0919 17:27:23.897724   34650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:27:23.899180   34650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:27:23.900482   34650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:27:23.901787   34650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:27:23.903375   34650 config.go:182] Loaded profile config "test-preload-766296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0919 17:27:23.903720   34650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:27:23.903759   34650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:23.917873   34650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41959
	I0919 17:27:23.918259   34650 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:23.918747   34650 main.go:141] libmachine: Using API Version  1
	I0919 17:27:23.918766   34650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:23.919110   34650 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:23.919291   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:27:23.921081   34650 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0919 17:27:23.922343   34650 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:27:23.922607   34650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:27:23.922644   34650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:23.936594   34650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0919 17:27:23.936925   34650 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:23.937388   34650 main.go:141] libmachine: Using API Version  1
	I0919 17:27:23.937433   34650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:23.937743   34650 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:23.937912   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:27:23.972302   34650 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:27:23.973687   34650 start.go:298] selected driver: kvm2
	I0919 17:27:23.973698   34650 start.go:902] validating driver "kvm2" against &{Name:test-preload-766296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-766296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:27:23.973809   34650 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:27:23.974448   34650 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:27:23.974519   34650 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:27:23.988770   34650 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:27:23.989066   34650 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 17:27:23.989100   34650 cni.go:84] Creating CNI manager for ""
	I0919 17:27:23.989113   34650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:27:23.989122   34650 start_flags.go:321] config:
	{Name:test-preload-766296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-766296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:27:23.989250   34650 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:27:23.991022   34650 out.go:177] * Starting control plane node test-preload-766296 in cluster test-preload-766296
	I0919 17:27:23.992554   34650 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0919 17:27:24.101797   34650 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0919 17:27:24.101831   34650 cache.go:57] Caching tarball of preloaded images
	I0919 17:27:24.101975   34650 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0919 17:27:24.103768   34650 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0919 17:27:24.105136   34650 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0919 17:27:24.224140   34650 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0919 17:27:47.152495   34650 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0919 17:27:47.152596   34650 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0919 17:27:48.057041   34650 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on crio
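	The preload step above is just an HTTP download plus an md5 check; the expected checksum travels in the URL's query string. Done by hand it would amount to roughly the following (illustrative only, with the URL and checksum copied from the download line above and the local filename shortened):

	  # manual equivalent of the preload download shown above (sketch)
	  URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4'
	  curl -fLo preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 "$URL"
	  echo 'b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4' | md5sum -c -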
	I0919 17:27:48.057201   34650 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/config.json ...
	I0919 17:27:48.057456   34650 start.go:365] acquiring machines lock for test-preload-766296: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:27:48.057542   34650 start.go:369] acquired machines lock for "test-preload-766296" in 57.635µs
	I0919 17:27:48.057560   34650 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:27:48.057568   34650 fix.go:54] fixHost starting: 
	I0919 17:27:48.057853   34650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:27:48.057893   34650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:48.071673   34650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0919 17:27:48.072081   34650 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:48.072617   34650 main.go:141] libmachine: Using API Version  1
	I0919 17:27:48.072638   34650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:48.072953   34650 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:48.073156   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:27:48.073318   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetState
	I0919 17:27:48.074887   34650 fix.go:102] recreateIfNeeded on test-preload-766296: state=Stopped err=<nil>
	I0919 17:27:48.074912   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	W0919 17:27:48.075098   34650 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:27:48.077278   34650 out.go:177] * Restarting existing kvm2 VM for "test-preload-766296" ...
	I0919 17:27:48.078921   34650 main.go:141] libmachine: (test-preload-766296) Calling .Start
	I0919 17:27:48.079077   34650 main.go:141] libmachine: (test-preload-766296) Ensuring networks are active...
	I0919 17:27:48.079806   34650 main.go:141] libmachine: (test-preload-766296) Ensuring network default is active
	I0919 17:27:48.080125   34650 main.go:141] libmachine: (test-preload-766296) Ensuring network mk-test-preload-766296 is active
	I0919 17:27:48.080441   34650 main.go:141] libmachine: (test-preload-766296) Getting domain xml...
	I0919 17:27:48.081138   34650 main.go:141] libmachine: (test-preload-766296) Creating domain...
	I0919 17:27:49.277581   34650 main.go:141] libmachine: (test-preload-766296) Waiting to get IP...
	I0919 17:27:49.278378   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:49.278779   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:49.278852   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:49.278764   34749 retry.go:31] will retry after 281.695265ms: waiting for machine to come up
	I0919 17:27:49.562436   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:49.562842   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:49.562870   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:49.562808   34749 retry.go:31] will retry after 324.495208ms: waiting for machine to come up
	I0919 17:27:49.889259   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:49.889858   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:49.889890   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:49.889802   34749 retry.go:31] will retry after 294.848587ms: waiting for machine to come up
	I0919 17:27:50.186337   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:50.186783   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:50.186815   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:50.186734   34749 retry.go:31] will retry after 416.130883ms: waiting for machine to come up
	I0919 17:27:50.604292   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:50.604725   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:50.604757   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:50.604677   34749 retry.go:31] will retry after 542.565408ms: waiting for machine to come up
	I0919 17:27:51.148370   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:51.148776   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:51.148810   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:51.148724   34749 retry.go:31] will retry after 827.558994ms: waiting for machine to come up
	I0919 17:27:51.977601   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:51.978015   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:51.978042   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:51.977964   34749 retry.go:31] will retry after 741.727666ms: waiting for machine to come up
	I0919 17:27:52.720880   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:52.721237   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:52.721266   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:52.721201   34749 retry.go:31] will retry after 1.306044396s: waiting for machine to come up
	I0919 17:27:54.029198   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:54.029635   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:54.029658   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:54.029597   34749 retry.go:31] will retry after 1.712792005s: waiting for machine to come up
	I0919 17:27:55.743753   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:55.744222   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:55.744241   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:55.744168   34749 retry.go:31] will retry after 2.259601081s: waiting for machine to come up
	I0919 17:27:58.006606   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:27:58.007051   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:27:58.007173   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:27:58.006986   34749 retry.go:31] will retry after 2.668545351s: waiting for machine to come up
	I0919 17:28:00.679415   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:00.679745   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:28:00.679769   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:28:00.679722   34749 retry.go:31] will retry after 2.590529242s: waiting for machine to come up
	I0919 17:28:03.272200   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:03.272554   34650 main.go:141] libmachine: (test-preload-766296) DBG | unable to find current IP address of domain test-preload-766296 in network mk-test-preload-766296
	I0919 17:28:03.272578   34650 main.go:141] libmachine: (test-preload-766296) DBG | I0919 17:28:03.272502   34749 retry.go:31] will retry after 3.421802821s: waiting for machine to come up
	I0919 17:28:06.698014   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.698464   34650 main.go:141] libmachine: (test-preload-766296) Found IP for machine: 192.168.39.230
	I0919 17:28:06.698495   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has current primary IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.698510   34650 main.go:141] libmachine: (test-preload-766296) Reserving static IP address...
	I0919 17:28:06.698999   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "test-preload-766296", mac: "52:54:00:6d:41:c6", ip: "192.168.39.230"} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:06.699031   34650 main.go:141] libmachine: (test-preload-766296) DBG | skip adding static IP to network mk-test-preload-766296 - found existing host DHCP lease matching {name: "test-preload-766296", mac: "52:54:00:6d:41:c6", ip: "192.168.39.230"}
	I0919 17:28:06.699049   34650 main.go:141] libmachine: (test-preload-766296) Reserved static IP address: 192.168.39.230
	I0919 17:28:06.699065   34650 main.go:141] libmachine: (test-preload-766296) Waiting for SSH to be available...
	I0919 17:28:06.699082   34650 main.go:141] libmachine: (test-preload-766296) DBG | Getting to WaitForSSH function...
	I0919 17:28:06.701450   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.701739   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:06.701773   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.701833   34650 main.go:141] libmachine: (test-preload-766296) DBG | Using SSH client type: external
	I0919 17:28:06.701856   34650 main.go:141] libmachine: (test-preload-766296) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/test-preload-766296/id_rsa (-rw-------)
	I0919 17:28:06.701917   34650 main.go:141] libmachine: (test-preload-766296) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/test-preload-766296/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:28:06.701944   34650 main.go:141] libmachine: (test-preload-766296) DBG | About to run SSH command:
	I0919 17:28:06.701963   34650 main.go:141] libmachine: (test-preload-766296) DBG | exit 0
	I0919 17:28:06.792078   34650 main.go:141] libmachine: (test-preload-766296) DBG | SSH cmd err, output: <nil>: 
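	The WaitForSSH probe that just succeeded is nothing more than running exit 0 over a non-interactive ssh connection with host-key checking disabled; once that returns cleanly the VM counts as reachable. An equivalent one-liner, using the key path and options shown in the DBG lines above (a sketch, not minikube's exact invocation):

	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
	    -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/test-preload-766296/id_rsa \
	    docker@192.168.39.230 'exit 0'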
	I0919 17:28:06.792418   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetConfigRaw
	I0919 17:28:06.793006   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetIP
	I0919 17:28:06.795248   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.795564   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:06.795597   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.795803   34650 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/config.json ...
	I0919 17:28:06.795972   34650 machine.go:88] provisioning docker machine ...
	I0919 17:28:06.795990   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:28:06.796144   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetMachineName
	I0919 17:28:06.796277   34650 buildroot.go:166] provisioning hostname "test-preload-766296"
	I0919 17:28:06.796299   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetMachineName
	I0919 17:28:06.796429   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:06.798414   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.798706   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:06.798736   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.798857   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:06.799017   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:06.799150   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:06.799264   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:06.799396   34650 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:06.799882   34650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0919 17:28:06.799904   34650 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-766296 && echo "test-preload-766296" | sudo tee /etc/hostname
	I0919 17:28:06.928791   34650 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-766296
	
	I0919 17:28:06.928823   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:06.931404   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.931769   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:06.931808   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:06.931972   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:06.932161   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:06.932330   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:06.932474   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:06.932688   34650 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:06.932986   34650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0919 17:28:06.933005   34650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-766296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-766296/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-766296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:28:07.058426   34650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:28:07.058454   34650 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:28:07.058480   34650 buildroot.go:174] setting up certificates
	I0919 17:28:07.058523   34650 provision.go:83] configureAuth start
	I0919 17:28:07.058543   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetMachineName
	I0919 17:28:07.058890   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetIP
	I0919 17:28:07.061440   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.061783   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:07.061809   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.061971   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:07.064111   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.064359   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:07.064380   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.064557   34650 provision.go:138] copyHostCerts
	I0919 17:28:07.064618   34650 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:28:07.064632   34650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:28:07.064716   34650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:28:07.064822   34650 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:28:07.064833   34650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:28:07.064870   34650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:28:07.065088   34650 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:28:07.065108   34650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:28:07.065159   34650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:28:07.065230   34650 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.test-preload-766296 san=[192.168.39.230 192.168.39.230 localhost 127.0.0.1 minikube test-preload-766296]
	I0919 17:28:07.144475   34650 provision.go:172] copyRemoteCerts
	I0919 17:28:07.144549   34650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:28:07.144579   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:07.147273   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.147613   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:07.147656   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.147785   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:07.147992   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:07.148141   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:07.148302   34650 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/test-preload-766296/id_rsa Username:docker}
	I0919 17:28:07.232974   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:28:07.259008   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0919 17:28:07.281519   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
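	The configureAuth phase above regenerates a server certificate signed by the local minikube CA, with the SANs listed in the provision line, and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. minikube does this in Go; a purely illustrative openssl equivalent for the same SAN set (filenames and validity period are assumptions, not minikube's implementation) would look like:

	  # illustrative only - minikube generates these certs in Go, not with openssl
	  openssl genrsa -out server-key.pem 2048
	  openssl req -new -key server-key.pem -subj "/O=jenkins.test-preload-766296" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:192.168.39.230,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:test-preload-766296') \
	    -out server.pem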
	I0919 17:28:07.303919   34650 provision.go:86] duration metric: configureAuth took 245.37892ms
	I0919 17:28:07.303945   34650 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:28:07.304164   34650 config.go:182] Loaded profile config "test-preload-766296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0919 17:28:07.304260   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:07.306817   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.307146   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:07.307181   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.307337   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:07.307512   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:07.307720   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:07.307896   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:07.308063   34650 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:07.308367   34650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0919 17:28:07.308384   34650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:28:07.606626   34650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:28:07.606658   34650 machine.go:91] provisioned docker machine in 810.672888ms
	I0919 17:28:07.606672   34650 start.go:300] post-start starting for "test-preload-766296" (driver="kvm2")
	I0919 17:28:07.606685   34650 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:28:07.606707   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:28:07.607012   34650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:28:07.607038   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:07.609914   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.610276   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:07.610296   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.610481   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:07.610682   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:07.610824   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:07.610952   34650 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/test-preload-766296/id_rsa Username:docker}
	I0919 17:28:07.704518   34650 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:28:07.709125   34650 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:28:07.709147   34650 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:28:07.709220   34650 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:28:07.709315   34650 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:28:07.709428   34650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:28:07.720222   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:28:07.744079   34650 start.go:303] post-start completed in 137.39509ms
	I0919 17:28:07.744100   34650 fix.go:56] fixHost completed within 19.686530548s
	I0919 17:28:07.744124   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:07.746742   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.747101   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:07.747131   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.747297   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:07.747460   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:07.747616   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:07.747784   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:07.747937   34650 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:07.748234   34650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0919 17:28:07.748245   34650 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:28:07.865513   34650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695144487.815755290
	
	I0919 17:28:07.865536   34650 fix.go:206] guest clock: 1695144487.815755290
	I0919 17:28:07.865545   34650 fix.go:219] Guest: 2023-09-19 17:28:07.81575529 +0000 UTC Remote: 2023-09-19 17:28:07.744104528 +0000 UTC m=+43.883865973 (delta=71.650762ms)
	I0919 17:28:07.865564   34650 fix.go:190] guest clock delta is within tolerance: 71.650762ms
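	For reference, the delta reported here is simply guest time minus host time at the moment of the check: 1695144487.815755290 - 1695144487.744104528 = 0.071650762 s, i.e. the 71.650762ms shown above, well inside the allowed drift.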
	I0919 17:28:07.865571   34650 start.go:83] releasing machines lock for "test-preload-766296", held for 19.808015702s
	I0919 17:28:07.865596   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:28:07.865828   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetIP
	I0919 17:28:07.868443   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.868792   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:07.868824   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.868927   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:28:07.869396   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:28:07.869577   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:28:07.869676   34650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:28:07.869717   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:07.869797   34650 ssh_runner.go:195] Run: cat /version.json
	I0919 17:28:07.869815   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:07.872289   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.872485   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.872630   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:07.872680   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.872752   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:07.872851   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:07.872872   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:07.872910   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:07.873070   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:07.873085   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:07.873240   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:07.873275   34650 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/test-preload-766296/id_rsa Username:docker}
	I0919 17:28:07.873346   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:07.873554   34650 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/test-preload-766296/id_rsa Username:docker}
	I0919 17:28:07.987496   34650 ssh_runner.go:195] Run: systemctl --version
	I0919 17:28:07.993205   34650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:28:08.133681   34650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:28:08.139555   34650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:28:08.139629   34650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:28:08.153732   34650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:28:08.153754   34650 start.go:469] detecting cgroup driver to use...
	I0919 17:28:08.153875   34650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:28:08.167628   34650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:28:08.179460   34650 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:28:08.179529   34650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:28:08.191471   34650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:28:08.203629   34650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:28:08.306757   34650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:28:08.428546   34650 docker.go:212] disabling docker service ...
	I0919 17:28:08.428608   34650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:28:08.442117   34650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:28:08.453704   34650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:28:08.561784   34650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:28:08.683532   34650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:28:08.696166   34650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:28:08.713082   34650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0919 17:28:08.713147   34650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:28:08.722210   34650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:28:08.722265   34650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:28:08.731166   34650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:28:08.740118   34650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
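
Taken together, the three sed edits above pin the pause image, the cgroup manager and the conmon cgroup in cri-o's drop-in config. As a rough illustration only (not minikube source; the TOML section headers are assumed), a small Go sketch of the intended end state of /etc/crio/crio.conf.d/02-crio.conf:

// Illustrative sketch only: render the cri-o drop-in state implied by the
// sed edits logged above. Values come from the log; the [crio.*] table
// headers are an assumption about where those keys normally live.
package main

import "fmt"

func main() {
	pauseImage := "registry.k8s.io/pause:3.7" // from crio.go:59 above
	cgroupManager := "cgroupfs"               // from crio.go:70 above

	fmt.Println("[crio.image]")
	fmt.Printf("pause_image = %q\n", pauseImage)
	fmt.Println()
	fmt.Println("[crio.runtime]")
	fmt.Printf("cgroup_manager = %q\n", cgroupManager)
	fmt.Println(`conmon_cgroup = "pod"`)
}
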
	I0919 17:28:08.748970   34650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:28:08.758285   34650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:28:08.766349   34650 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:28:08.766413   34650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 17:28:08.779159   34650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
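
The sequence above is a fallback: when the bridge-netfilter sysctl cannot be read, minikube loads the br_netfilter module and then enables IPv4 forwarding. A minimal Go sketch of that flow, mirroring the logged commands (illustrative only; needs root on a Linux guest):

// Illustrative sketch only: probe the sysctl, fall back to modprobe, then
// turn on IPv4 forwarding, as the log above records.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	// On a freshly booted guest the /proc entry may not exist yet.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// A missing entry usually means the module is not loaded.
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	// Pod networking needs IPv4 forwarding regardless.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
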
	I0919 17:28:08.787153   34650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:28:08.908925   34650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:28:09.088132   34650 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:28:09.088204   34650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:28:09.093047   34650 start.go:537] Will wait 60s for crictl version
	I0919 17:28:09.093097   34650 ssh_runner.go:195] Run: which crictl
	I0919 17:28:09.096664   34650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:28:09.131783   34650 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:28:09.131849   34650 ssh_runner.go:195] Run: crio --version
	I0919 17:28:09.176524   34650 ssh_runner.go:195] Run: crio --version
	I0919 17:28:09.223838   34650 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I0919 17:28:09.225241   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetIP
	I0919 17:28:09.227719   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:09.228042   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:09.228075   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:09.228255   34650 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 17:28:09.232103   34650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:28:09.246082   34650 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0919 17:28:09.246124   34650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:28:09.284692   34650 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0919 17:28:09.284752   34650 ssh_runner.go:195] Run: which lz4
	I0919 17:28:09.288584   34650 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 17:28:09.292526   34650 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:28:09.292557   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0919 17:28:11.069069   34650 crio.go:444] Took 1.780509 seconds to copy over tarball
	I0919 17:28:11.069136   34650 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 17:28:14.110914   34650 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.04175543s)
	I0919 17:28:14.110937   34650 crio.go:451] Took 3.041851 seconds to extract the tarball
	I0919 17:28:14.110955   34650 ssh_runner.go:146] rm: /preloaded.tar.lz4
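
The preload step above boils down to: if /preloaded.tar.lz4 is not already on the guest, transfer it, unpack it over /var with lz4, then delete it. A minimal Go sketch under those assumptions (the SSH transfer is stubbed out; not minikube's actual code):

// Illustrative sketch only: existence check, lz4 extraction over /var, and
// cleanup of the preload tarball, matching the commands logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball missing; the real flow copies it from the host cache first")
		return
	}
	// Unpack on top of /var so container storage arrives pre-populated.
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	// The log deletes the tarball once extraction succeeds.
	_ = exec.Command("sudo", "rm", "-f", tarball).Run()
}
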
	I0919 17:28:14.150769   34650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:28:14.202799   34650 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0919 17:28:14.202824   34650 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 17:28:14.202902   34650 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:28:14.202929   34650 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0919 17:28:14.202940   34650 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 17:28:14.202972   34650 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 17:28:14.202990   34650 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 17:28:14.202906   34650 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 17:28:14.202929   34650 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 17:28:14.202911   34650 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0919 17:28:14.204227   34650 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 17:28:14.204246   34650 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:28:14.204232   34650 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0919 17:28:14.204228   34650 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 17:28:14.204285   34650 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 17:28:14.204290   34650 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 17:28:14.204325   34650 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0919 17:28:14.204432   34650 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 17:28:14.365114   34650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0919 17:28:14.378705   34650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0919 17:28:14.383494   34650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0919 17:28:14.392814   34650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0919 17:28:14.433901   34650 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0919 17:28:14.433947   34650 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0919 17:28:14.433991   34650 ssh_runner.go:195] Run: which crictl
	I0919 17:28:14.460616   34650 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0919 17:28:14.460652   34650 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0919 17:28:14.460689   34650 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0919 17:28:14.460699   34650 ssh_runner.go:195] Run: which crictl
	I0919 17:28:14.460717   34650 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0919 17:28:14.460765   34650 ssh_runner.go:195] Run: which crictl
	I0919 17:28:14.466943   34650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 17:28:14.473018   34650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0919 17:28:14.475766   34650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0919 17:28:14.513404   34650 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0919 17:28:14.513441   34650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0919 17:28:14.513515   34650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0919 17:28:14.513450   34650 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0919 17:28:14.513583   34650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0919 17:28:14.513598   34650 ssh_runner.go:195] Run: which crictl
	I0919 17:28:14.565624   34650 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0919 17:28:14.565664   34650 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 17:28:14.565707   34650 ssh_runner.go:195] Run: which crictl
	I0919 17:28:14.610234   34650 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0919 17:28:14.610274   34650 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0919 17:28:14.610324   34650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0919 17:28:14.610327   34650 ssh_runner.go:195] Run: which crictl
	I0919 17:28:14.610429   34650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0919 17:28:14.611915   34650 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0919 17:28:14.611947   34650 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0919 17:28:14.611990   34650 ssh_runner.go:195] Run: which crictl
	I0919 17:28:14.639855   34650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0919 17:28:14.639954   34650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0919 17:28:14.650681   34650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0919 17:28:14.650729   34650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0919 17:28:14.650758   34650 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0919 17:28:14.650775   34650 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0919 17:28:14.650810   34650 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0919 17:28:14.650836   34650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0919 17:28:14.650765   34650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0919 17:28:14.650859   34650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0919 17:28:14.650899   34650 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0919 17:28:14.650941   34650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0919 17:28:15.119613   34650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:28:16.982102   34650 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.331388605s)
	I0919 17:28:16.982139   34650 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.331303091s)
	I0919 17:28:16.982155   34650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0919 17:28:16.982161   34650 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0919 17:28:16.982183   34650 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0919 17:28:16.982218   34650 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0919 17:28:16.982246   34650 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.331483753s)
	I0919 17:28:16.982270   34650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0919 17:28:16.982282   34650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0919 17:28:16.982301   34650 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.331444288s)
	I0919 17:28:16.982336   34650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0919 17:28:16.982357   34650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0919 17:28:16.982368   34650 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4: (2.331464978s)
	I0919 17:28:16.982397   34650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0919 17:28:16.982405   34650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0919 17:28:16.982420   34650 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.331460907s)
	I0919 17:28:16.982442   34650 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0919 17:28:16.982461   34650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0919 17:28:16.982481   34650 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.862829607s)
	I0919 17:28:19.346117   34650 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.363876896s)
	I0919 17:28:19.346151   34650 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0919 17:28:19.346175   34650 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0919 17:28:19.346223   34650 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0919 17:28:19.346245   34650 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.363957957s)
	I0919 17:28:19.346278   34650 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0919 17:28:19.346320   34650 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.363943274s)
	I0919 17:28:19.346349   34650 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0919 17:28:19.346351   34650 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.363875714s)
	I0919 17:28:19.346365   34650 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0919 17:28:19.346391   34650 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.363974647s)
	I0919 17:28:19.346406   34650 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0919 17:28:19.488377   34650 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0919 17:28:19.488438   34650 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0919 17:28:19.488501   34650 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0919 17:28:20.235530   34650 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0919 17:28:20.235583   34650 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0919 17:28:20.235646   34650 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0919 17:28:20.987042   34650 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0919 17:28:20.987079   34650 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0919 17:28:20.987131   34650 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0919 17:28:21.834882   34650 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0919 17:28:21.834927   34650 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0919 17:28:21.834976   34650 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0919 17:28:22.279873   34650 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0919 17:28:22.279910   34650 cache_images.go:123] Successfully loaded all cached images
	I0919 17:28:22.279914   34650 cache_images.go:92] LoadImages completed in 8.077077947s
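
The LoadImages phase above repeats the same per-image pattern: ask podman whether the image is already in the runtime, and if not, remove any stale reference with crictl and podman-load the cached tarball. A compact Go sketch of that loop (illustrative only; image list and cache layout are taken from the log):

// Illustrative sketch only: per-image cache loading as seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"path"
	"strings"
)

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.24.4",
		"registry.k8s.io/coredns/coredns:v1.8.6",
		"registry.k8s.io/pause:3.7",
	}
	for _, img := range images {
		if err := exec.Command("sudo", "podman", "image", "inspect", img).Run(); err == nil {
			continue // already present in the container runtime
		}
		// Drop any stale tag before loading the cached copy.
		_ = exec.Command("sudo", "crictl", "rmi", img).Run()
		// e.g. registry.k8s.io/pause:3.7 -> /var/lib/minikube/images/pause_3.7
		base := strings.ReplaceAll(path.Base(img), ":", "_")
		tar := "/var/lib/minikube/images/" + base
		fmt.Println("loading", tar)
		_ = exec.Command("sudo", "podman", "load", "-i", tar).Run()
	}
}
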
	I0919 17:28:22.279994   34650 ssh_runner.go:195] Run: crio config
	I0919 17:28:22.334336   34650 cni.go:84] Creating CNI manager for ""
	I0919 17:28:22.334360   34650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:28:22.334381   34650 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:28:22.334402   34650 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-766296 NodeName:test-preload-766296 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 17:28:22.334542   34650 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-766296"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:28:22.334634   34650 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-766296 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-766296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:28:22.334691   34650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0919 17:28:22.343275   34650 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:28:22.343341   34650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:28:22.351168   34650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0919 17:28:22.366677   34650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:28:22.381862   34650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0919 17:28:22.397585   34650 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0919 17:28:22.401239   34650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
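
The one-liner above keeps /etc/hosts idempotent: it drops any existing line for the name, appends the desired mapping, and copies the file back into place. A small Go sketch of the same idea (illustrative only; it prints the result instead of writing the file):

// Illustrative sketch only: rebuild /etc/hosts without stale entries for the
// control-plane name, then append the mapping logged above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal" // entry from the log
	const ip = "192.168.39.230"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror the logged `grep -v $'\t<name>$'`: drop any stale mapping.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// The real flow writes this to a temp file and `sudo cp`s it into place.
	fmt.Println(strings.Join(kept, "\n"))
}
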
	I0919 17:28:22.413368   34650 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296 for IP: 192.168.39.230
	I0919 17:28:22.413400   34650 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:22.413557   34650 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:28:22.413611   34650 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:28:22.413706   34650 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/client.key
	I0919 17:28:22.413787   34650 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/apiserver.key.165d62e4
	I0919 17:28:22.413847   34650 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/proxy-client.key
	I0919 17:28:22.414004   34650 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:28:22.414066   34650 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:28:22.414082   34650 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:28:22.414119   34650 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:28:22.414153   34650 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:28:22.414189   34650 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:28:22.414255   34650 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:28:22.414938   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:28:22.437604   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 17:28:22.459917   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:28:22.482078   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 17:28:22.503936   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:28:22.526116   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:28:22.548179   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:28:22.570115   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:28:22.592041   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:28:22.613885   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:28:22.635394   34650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:28:22.658062   34650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:28:22.673428   34650 ssh_runner.go:195] Run: openssl version
	I0919 17:28:22.678621   34650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:28:22.687225   34650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:28:22.691679   34650 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:28:22.691721   34650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:28:22.697030   34650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:28:22.705869   34650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:28:22.714532   34650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:28:22.718729   34650 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:28:22.718771   34650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:28:22.723987   34650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
	I0919 17:28:22.732901   34650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:28:22.741967   34650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:28:22.746215   34650 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:28:22.746258   34650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:28:22.751322   34650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
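
The certificate steps above install each CA by OpenSSL's lookup-by-hash convention: copy the PEM under /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash, and symlink /etc/ssl/certs/<hash>.0 at it. A short Go sketch of that pairing (illustrative only; the example path is one used in the log):

// Illustrative sketch only: subject-hash symlink installation of a CA cert.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path used in the log

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"

	// -f replaces an existing link, matching the logged `ln -fs` invocation.
	if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println("linked", link, "->", pem)
}
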
	I0919 17:28:22.760120   34650 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:28:22.764493   34650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:28:22.769804   34650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:28:22.775100   34650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:28:22.780476   34650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:28:22.785693   34650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 17:28:22.791107   34650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 17:28:22.796580   34650 kubeadm.go:404] StartCluster: {Name:test-preload-766296 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.4 ClusterName:test-preload-766296 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:28:22.796654   34650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 17:28:22.796697   34650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:28:22.831358   34650 cri.go:89] found id: ""
	I0919 17:28:22.831416   34650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:28:22.840040   34650 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0919 17:28:22.840056   34650 kubeadm.go:636] restartCluster start
	I0919 17:28:22.840094   34650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 17:28:22.848022   34650 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:22.848387   34650 kubeconfig.go:135] verify returned: extract IP: "test-preload-766296" does not appear in /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:28:22.848510   34650 kubeconfig.go:146] "test-preload-766296" context is missing from /home/jenkins/minikube-integration/17240-6042/kubeconfig - will repair!
	I0919 17:28:22.848811   34650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:22.849355   34650 kapi.go:59] client config for test-preload-766296: &rest.Config{Host:"https://192.168.39.230:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:28:22.850029   34650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 17:28:22.857609   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:22.857642   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:22.868143   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:22.868155   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:22.868178   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:22.877510   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:23.378556   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:23.378627   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:23.389561   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:23.878246   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:23.878302   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:23.889437   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:24.378434   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:24.378539   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:24.389459   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:24.878026   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:24.878107   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:24.889623   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:25.378300   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:25.378398   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:25.389484   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:25.878037   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:25.878111   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:25.889281   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:26.378545   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:26.378625   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:26.389654   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:26.878277   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:26.878353   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:26.889478   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:27.377980   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:27.378044   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:27.388829   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:27.878553   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:27.878634   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:27.890889   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:28.377821   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:28.377887   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:28.388627   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:28.878272   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:28.878367   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:28.890101   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:29.377565   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:29.377635   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:29.389236   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:29.877794   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:29.877875   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:29.890607   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:30.378181   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:30.378268   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:30.389401   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:30.877946   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:30.878028   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:30.889434   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:31.378022   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:31.378123   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:31.389430   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:31.877974   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:31.878092   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:31.892063   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:32.377631   34650 api_server.go:166] Checking apiserver status ...
	I0919 17:28:32.377733   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:32.388931   34650 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:32.857667   34650 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0919 17:28:32.857696   34650 kubeadm.go:1128] stopping kube-system containers ...
	I0919 17:28:32.857707   34650 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0919 17:28:32.857764   34650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:28:32.894969   34650 cri.go:89] found id: ""
	I0919 17:28:32.895042   34650 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 17:28:32.909646   34650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:28:32.918252   34650 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:28:32.918311   34650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:28:32.926692   34650 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0919 17:28:32.926717   34650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:33.041315   34650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:33.584954   34650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:33.925081   34650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:34.027976   34650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
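
Because existing configuration files were found, the restart path reruns individual kubeadm init phases against the generated config rather than doing a full kubeadm init. A Go sketch of that phase loop (illustrative only; the phase list and config path are the ones logged above, and the kubeadm binary path is shortened):

// Illustrative sketch only: rerun selected kubeadm init phases in order,
// stopping at the first failure since later phases depend on earlier ones.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", cfg)
		fmt.Println("$ sudo", strings.Join(args, " "))
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
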
	I0919 17:28:34.163697   34650 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:28:34.163771   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:34.179604   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:34.693585   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:35.193501   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:35.694004   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:36.193375   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:36.220960   34650 api_server.go:72] duration metric: took 2.057256519s to wait for apiserver process to appear ...
	I0919 17:28:36.220987   34650 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:28:36.221004   34650 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0919 17:28:41.222269   34650 api_server.go:269] stopped: https://192.168.39.230:8443/healthz: Get "https://192.168.39.230:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 17:28:41.222307   34650 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0919 17:28:41.241798   34650 api_server.go:279] https://192.168.39.230:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:28:41.241831   34650 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:28:41.742558   34650 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0919 17:28:41.749044   34650 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 17:28:41.749098   34650 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 17:28:42.242654   34650 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0919 17:28:42.252273   34650 api_server.go:279] https://192.168.39.230:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 17:28:42.252307   34650 api_server.go:103] status: https://192.168.39.230:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 17:28:42.742918   34650 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0919 17:28:42.753511   34650 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0919 17:28:42.760419   34650 api_server.go:141] control plane version: v1.24.4
	I0919 17:28:42.760449   34650 api_server.go:131] duration metric: took 6.539453957s to wait for apiserver health ...
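The polling above is minikube waiting for kube-apiserver's /healthz to stop returning 500: the two [-] poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) clear within about a second and the endpoint flips to 200. A rough way to reproduce the same verbose, check-by-check output by hand, assuming the profile's kubeconfig context is named after the profile as usual:

    $ kubectl --context test-preload-766296 get --raw='/healthz?verbose'
    # or poll until healthy:
    $ until kubectl --context test-preload-766296 get --raw='/healthz' >/dev/null 2>&1; do sleep 1; done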
	I0919 17:28:42.760459   34650 cni.go:84] Creating CNI manager for ""
	I0919 17:28:42.760464   34650 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:28:42.762177   34650 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:28:42.763562   34650 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:28:42.772843   34650 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
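The bridge CNI setup above only drops a single conflist into /etc/cni/net.d. The 457-byte payload itself is not printed in the log; purely for illustration, a bridge-plus-portmap conflist of the same general shape (placeholder values, not the file minikube wrote) looks like:

    $ cat >/tmp/1-k8s.conflist.example <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF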
	I0919 17:28:42.791874   34650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:28:42.807811   34650 system_pods.go:59] 8 kube-system pods found
	I0919 17:28:42.807845   34650 system_pods.go:61] "coredns-6d4b75cb6d-4ppgb" [e8e3eb1c-6b52-449c-b48d-d7be312c74c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:28:42.807854   34650 system_pods.go:61] "coredns-6d4b75cb6d-6x8tf" [d7e8bbed-dbdf-49cd-b38f-885a0eec1682] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:28:42.807861   34650 system_pods.go:61] "etcd-test-preload-766296" [99f76987-b1cc-4eef-8aae-9e6cfc050dc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 17:28:42.807867   34650 system_pods.go:61] "kube-apiserver-test-preload-766296" [66d20a3e-ff6e-41a2-9672-b772edc4de37] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 17:28:42.807877   34650 system_pods.go:61] "kube-controller-manager-test-preload-766296" [8ec02af8-7b75-4032-b552-befa472e2cca] Running
	I0919 17:28:42.807881   34650 system_pods.go:61] "kube-proxy-27fmr" [f42e87a7-ec92-4d35-b7aa-939cecca949a] Running
	I0919 17:28:42.807885   34650 system_pods.go:61] "kube-scheduler-test-preload-766296" [6bb999a7-5520-467a-94bb-452408703fd3] Running
	I0919 17:28:42.807889   34650 system_pods.go:61] "storage-provisioner" [feb83a14-2729-4ae3-ac6c-dbfb3563c3f0] Running
	I0919 17:28:42.807895   34650 system_pods.go:74] duration metric: took 15.991153ms to wait for pod list to return data ...
	I0919 17:28:42.807902   34650 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:28:42.811981   34650 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:28:42.812015   34650 node_conditions.go:123] node cpu capacity is 2
	I0919 17:28:42.812026   34650 node_conditions.go:105] duration metric: took 4.119895ms to run NodePressure ...
	I0919 17:28:42.812048   34650 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:43.124207   34650 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:28:43.129286   34650 kubeadm.go:787] kubelet initialised
	I0919 17:28:43.129307   34650 kubeadm.go:788] duration metric: took 5.0755ms waiting for restarted kubelet to initialise ...
	I0919 17:28:43.129314   34650 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:28:43.134695   34650 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-4ppgb" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:43.139532   34650 pod_ready.go:97] node "test-preload-766296" hosting pod "coredns-6d4b75cb6d-4ppgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.139558   34650 pod_ready.go:81] duration metric: took 4.835097ms waiting for pod "coredns-6d4b75cb6d-4ppgb" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:43.139568   34650 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-766296" hosting pod "coredns-6d4b75cb6d-4ppgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.139579   34650 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-6x8tf" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:43.147577   34650 pod_ready.go:97] node "test-preload-766296" hosting pod "coredns-6d4b75cb6d-6x8tf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.147599   34650 pod_ready.go:81] duration metric: took 8.011846ms waiting for pod "coredns-6d4b75cb6d-6x8tf" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:43.147608   34650 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-766296" hosting pod "coredns-6d4b75cb6d-6x8tf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.147616   34650 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:43.151760   34650 pod_ready.go:97] node "test-preload-766296" hosting pod "etcd-test-preload-766296" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.151785   34650 pod_ready.go:81] duration metric: took 4.154387ms waiting for pod "etcd-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:43.151794   34650 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-766296" hosting pod "etcd-test-preload-766296" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.151801   34650 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:43.196667   34650 pod_ready.go:97] node "test-preload-766296" hosting pod "kube-apiserver-test-preload-766296" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.196695   34650 pod_ready.go:81] duration metric: took 44.886649ms waiting for pod "kube-apiserver-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:43.196705   34650 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-766296" hosting pod "kube-apiserver-test-preload-766296" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.196713   34650 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:43.601030   34650 pod_ready.go:97] node "test-preload-766296" hosting pod "kube-controller-manager-test-preload-766296" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.601054   34650 pod_ready.go:81] duration metric: took 404.334035ms waiting for pod "kube-controller-manager-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:43.601064   34650 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-766296" hosting pod "kube-controller-manager-test-preload-766296" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.601073   34650 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-27fmr" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:43.995551   34650 pod_ready.go:97] node "test-preload-766296" hosting pod "kube-proxy-27fmr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.995578   34650 pod_ready.go:81] duration metric: took 394.499561ms waiting for pod "kube-proxy-27fmr" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:43.995588   34650 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-766296" hosting pod "kube-proxy-27fmr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:43.995594   34650 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:44.396043   34650 pod_ready.go:97] node "test-preload-766296" hosting pod "kube-scheduler-test-preload-766296" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:44.396067   34650 pod_ready.go:81] duration metric: took 400.466577ms waiting for pod "kube-scheduler-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:44.396079   34650 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-766296" hosting pod "kube-scheduler-test-preload-766296" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:44.396090   34650 pod_ready.go:38] duration metric: took 1.266765665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
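All of the pod_ready checks above fail fast for the same reason: the node itself still reports Ready=False right after the restart, so minikube skips the per-pod wait and retries once the node flips to Ready (which happens further down). A rough kubectl equivalent of the same wait, assuming the usual context name:

    $ kubectl --context test-preload-766296 wait --for=condition=Ready node/test-preload-766296 --timeout=6m
    $ kubectl --context test-preload-766296 -n kube-system wait --for=condition=Ready pod --all --timeout=4m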
	I0919 17:28:44.396111   34650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:28:44.407614   34650 ops.go:34] apiserver oom_adj: -16
	I0919 17:28:44.407635   34650 kubeadm.go:640] restartCluster took 21.567573992s
	I0919 17:28:44.407645   34650 kubeadm.go:406] StartCluster complete in 21.611082964s
	I0919 17:28:44.407659   34650 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:44.407726   34650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:28:44.408354   34650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
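minikube then rewrites the kubeconfig at the path shown so the test-preload-766296 context points at https://192.168.39.230:8443. One way to inspect the result, using the same file path as in the log:

    $ kubectl --kubeconfig=/home/jenkins/minikube-integration/17240-6042/kubeconfig config get-contexts
    $ kubectl --kubeconfig=/home/jenkins/minikube-integration/17240-6042/kubeconfig config view --minify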
	I0919 17:28:44.408577   34650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:28:44.408720   34650 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:28:44.408810   34650 addons.go:69] Setting storage-provisioner=true in profile "test-preload-766296"
	I0919 17:28:44.408818   34650 config.go:182] Loaded profile config "test-preload-766296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0919 17:28:44.408826   34650 addons.go:69] Setting default-storageclass=true in profile "test-preload-766296"
	I0919 17:28:44.408835   34650 addons.go:231] Setting addon storage-provisioner=true in "test-preload-766296"
	W0919 17:28:44.408844   34650 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:28:44.408850   34650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-766296"
	I0919 17:28:44.408888   34650 host.go:66] Checking if "test-preload-766296" exists ...
	I0919 17:28:44.409224   34650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:28:44.409267   34650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:44.409234   34650 kapi.go:59] client config for test-preload-766296: &rest.Config{Host:"https://192.168.39.230:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:28:44.409310   34650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:28:44.409345   34650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:44.413015   34650 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-766296" context rescaled to 1 replicas
	I0919 17:28:44.413067   34650 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:28:44.414877   34650 out.go:177] * Verifying Kubernetes components...
	I0919 17:28:44.416330   34650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:28:44.424315   34650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0919 17:28:44.424435   34650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0919 17:28:44.424771   34650 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:44.424817   34650 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:44.425247   34650 main.go:141] libmachine: Using API Version  1
	I0919 17:28:44.425267   34650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:44.425252   34650 main.go:141] libmachine: Using API Version  1
	I0919 17:28:44.425325   34650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:44.425628   34650 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:44.425638   34650 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:44.425815   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetState
	I0919 17:28:44.426185   34650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:28:44.426228   34650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:44.428153   34650 kapi.go:59] client config for test-preload-766296: &rest.Config{Host:"https://192.168.39.230:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/profiles/test-preload-766296/client.key", CAFile:"/home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 17:28:44.436931   34650 addons.go:231] Setting addon default-storageclass=true in "test-preload-766296"
	W0919 17:28:44.436951   34650 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:28:44.436970   34650 host.go:66] Checking if "test-preload-766296" exists ...
	I0919 17:28:44.437214   34650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:28:44.437248   34650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:44.441441   34650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38085
	I0919 17:28:44.441825   34650 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:44.442287   34650 main.go:141] libmachine: Using API Version  1
	I0919 17:28:44.442315   34650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:44.442647   34650 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:44.442875   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetState
	I0919 17:28:44.444593   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:28:44.446629   34650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:28:44.448120   34650 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:28:44.448139   34650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:28:44.448159   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:44.451139   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:44.451541   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:44.451568   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:44.451702   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:44.451917   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:44.452124   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:44.452258   34650 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/test-preload-766296/id_rsa Username:docker}
	I0919 17:28:44.453425   34650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37535
	I0919 17:28:44.453807   34650 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:44.454233   34650 main.go:141] libmachine: Using API Version  1
	I0919 17:28:44.454256   34650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:44.454524   34650 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:44.454956   34650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:28:44.454999   34650 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:44.468991   34650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
	I0919 17:28:44.469347   34650 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:44.469840   34650 main.go:141] libmachine: Using API Version  1
	I0919 17:28:44.469869   34650 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:44.470198   34650 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:44.470383   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetState
	I0919 17:28:44.472162   34650 main.go:141] libmachine: (test-preload-766296) Calling .DriverName
	I0919 17:28:44.472393   34650 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:28:44.472424   34650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:28:44.472445   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHHostname
	I0919 17:28:44.475446   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:44.475910   34650 main.go:141] libmachine: (test-preload-766296) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:41:c6", ip: ""} in network mk-test-preload-766296: {Iface:virbr1 ExpiryTime:2023-09-19 18:28:00 +0000 UTC Type:0 Mac:52:54:00:6d:41:c6 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:test-preload-766296 Clientid:01:52:54:00:6d:41:c6}
	I0919 17:28:44.475942   34650 main.go:141] libmachine: (test-preload-766296) DBG | domain test-preload-766296 has defined IP address 192.168.39.230 and MAC address 52:54:00:6d:41:c6 in network mk-test-preload-766296
	I0919 17:28:44.476191   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHPort
	I0919 17:28:44.476375   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHKeyPath
	I0919 17:28:44.476547   34650 main.go:141] libmachine: (test-preload-766296) Calling .GetSSHUsername
	I0919 17:28:44.476704   34650 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/test-preload-766296/id_rsa Username:docker}
	I0919 17:28:44.603952   34650 node_ready.go:35] waiting up to 6m0s for node "test-preload-766296" to be "Ready" ...
	I0919 17:28:44.604219   34650 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0919 17:28:44.608084   34650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:28:44.621653   34650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:28:45.454042   34650 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:45.454066   34650 main.go:141] libmachine: (test-preload-766296) Calling .Close
	I0919 17:28:45.454111   34650 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:45.454135   34650 main.go:141] libmachine: (test-preload-766296) Calling .Close
	I0919 17:28:45.454373   34650 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:45.454392   34650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:45.454418   34650 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:45.454432   34650 main.go:141] libmachine: (test-preload-766296) Calling .Close
	I0919 17:28:45.454479   34650 main.go:141] libmachine: (test-preload-766296) DBG | Closing plugin on server side
	I0919 17:28:45.454487   34650 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:45.454501   34650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:45.454525   34650 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:45.454534   34650 main.go:141] libmachine: (test-preload-766296) Calling .Close
	I0919 17:28:45.454630   34650 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:45.454661   34650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:45.454783   34650 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:45.454807   34650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:45.454823   34650 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:45.454825   34650 main.go:141] libmachine: (test-preload-766296) DBG | Closing plugin on server side
	I0919 17:28:45.454836   34650 main.go:141] libmachine: (test-preload-766296) Calling .Close
	I0919 17:28:45.455028   34650 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:45.455045   34650 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:45.455055   34650 main.go:141] libmachine: (test-preload-766296) DBG | Closing plugin on server side
	I0919 17:28:45.457079   34650 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0919 17:28:45.458553   34650 addons.go:502] enable addons completed in 1.049843301s: enabled=[storage-provisioner default-storageclass]
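Both addons are applied from manifests copied to /etc/kubernetes/addons on the guest (storage-provisioner.yaml and storageclass.yaml above). To confirm the result from the host after the fact, for example:

    $ minikube -p test-preload-766296 addons list
    $ kubectl --context test-preload-766296 -n kube-system get pod storage-provisioner
    $ kubectl --context test-preload-766296 get storageclass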
	I0919 17:28:46.799855   34650 node_ready.go:58] node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:49.301653   34650 node_ready.go:58] node "test-preload-766296" has status "Ready":"False"
	I0919 17:28:51.798834   34650 node_ready.go:49] node "test-preload-766296" has status "Ready":"True"
	I0919 17:28:51.798858   34650 node_ready.go:38] duration metric: took 7.194882058s waiting for node "test-preload-766296" to be "Ready" ...
	I0919 17:28:51.798867   34650 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:28:51.803728   34650 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-6x8tf" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:51.808757   34650 pod_ready.go:92] pod "coredns-6d4b75cb6d-6x8tf" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:51.808783   34650 pod_ready.go:81] duration metric: took 5.023182ms waiting for pod "coredns-6d4b75cb6d-6x8tf" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:51.808793   34650 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:53.824830   34650 pod_ready.go:102] pod "etcd-test-preload-766296" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:55.327372   34650 pod_ready.go:92] pod "etcd-test-preload-766296" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:55.327391   34650 pod_ready.go:81] duration metric: took 3.518592924s waiting for pod "etcd-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:55.327400   34650 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:55.331609   34650 pod_ready.go:92] pod "kube-apiserver-test-preload-766296" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:55.331628   34650 pod_ready.go:81] duration metric: took 4.219656ms waiting for pod "kube-apiserver-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:55.331636   34650 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:55.342929   34650 pod_ready.go:92] pod "kube-controller-manager-test-preload-766296" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:55.342944   34650 pod_ready.go:81] duration metric: took 11.302822ms waiting for pod "kube-controller-manager-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:55.342952   34650 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-27fmr" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:55.398228   34650 pod_ready.go:92] pod "kube-proxy-27fmr" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:55.398250   34650 pod_ready.go:81] duration metric: took 55.289194ms waiting for pod "kube-proxy-27fmr" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:55.398258   34650 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:55.798754   34650 pod_ready.go:92] pod "kube-scheduler-test-preload-766296" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:55.798780   34650 pod_ready.go:81] duration metric: took 400.513106ms waiting for pod "kube-scheduler-test-preload-766296" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:55.798789   34650 pod_ready.go:38] duration metric: took 3.999914255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:28:55.798804   34650 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:28:55.798847   34650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:55.811787   34650 api_server.go:72] duration metric: took 11.398683716s to wait for apiserver process to appear ...
	I0919 17:28:55.811806   34650 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:28:55.811818   34650 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0919 17:28:55.817102   34650 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0919 17:28:55.817883   34650 api_server.go:141] control plane version: v1.24.4
	I0919 17:28:55.817897   34650 api_server.go:131] duration metric: took 6.086237ms to wait for apiserver health ...
	I0919 17:28:55.817903   34650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:28:56.001282   34650 system_pods.go:59] 7 kube-system pods found
	I0919 17:28:56.001305   34650 system_pods.go:61] "coredns-6d4b75cb6d-6x8tf" [d7e8bbed-dbdf-49cd-b38f-885a0eec1682] Running
	I0919 17:28:56.001310   34650 system_pods.go:61] "etcd-test-preload-766296" [99f76987-b1cc-4eef-8aae-9e6cfc050dc1] Running
	I0919 17:28:56.001314   34650 system_pods.go:61] "kube-apiserver-test-preload-766296" [66d20a3e-ff6e-41a2-9672-b772edc4de37] Running
	I0919 17:28:56.001318   34650 system_pods.go:61] "kube-controller-manager-test-preload-766296" [8ec02af8-7b75-4032-b552-befa472e2cca] Running
	I0919 17:28:56.001322   34650 system_pods.go:61] "kube-proxy-27fmr" [f42e87a7-ec92-4d35-b7aa-939cecca949a] Running
	I0919 17:28:56.001326   34650 system_pods.go:61] "kube-scheduler-test-preload-766296" [6bb999a7-5520-467a-94bb-452408703fd3] Running
	I0919 17:28:56.001330   34650 system_pods.go:61] "storage-provisioner" [feb83a14-2729-4ae3-ac6c-dbfb3563c3f0] Running
	I0919 17:28:56.001335   34650 system_pods.go:74] duration metric: took 183.4271ms to wait for pod list to return data ...
	I0919 17:28:56.001342   34650 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:28:56.198688   34650 default_sa.go:45] found service account: "default"
	I0919 17:28:56.198713   34650 default_sa.go:55] duration metric: took 197.366081ms for default service account to be created ...
	I0919 17:28:56.198723   34650 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:28:56.401542   34650 system_pods.go:86] 7 kube-system pods found
	I0919 17:28:56.401566   34650 system_pods.go:89] "coredns-6d4b75cb6d-6x8tf" [d7e8bbed-dbdf-49cd-b38f-885a0eec1682] Running
	I0919 17:28:56.401571   34650 system_pods.go:89] "etcd-test-preload-766296" [99f76987-b1cc-4eef-8aae-9e6cfc050dc1] Running
	I0919 17:28:56.401575   34650 system_pods.go:89] "kube-apiserver-test-preload-766296" [66d20a3e-ff6e-41a2-9672-b772edc4de37] Running
	I0919 17:28:56.401579   34650 system_pods.go:89] "kube-controller-manager-test-preload-766296" [8ec02af8-7b75-4032-b552-befa472e2cca] Running
	I0919 17:28:56.401582   34650 system_pods.go:89] "kube-proxy-27fmr" [f42e87a7-ec92-4d35-b7aa-939cecca949a] Running
	I0919 17:28:56.401586   34650 system_pods.go:89] "kube-scheduler-test-preload-766296" [6bb999a7-5520-467a-94bb-452408703fd3] Running
	I0919 17:28:56.401590   34650 system_pods.go:89] "storage-provisioner" [feb83a14-2729-4ae3-ac6c-dbfb3563c3f0] Running
	I0919 17:28:56.401596   34650 system_pods.go:126] duration metric: took 202.868187ms to wait for k8s-apps to be running ...
	I0919 17:28:56.401602   34650 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:28:56.401640   34650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:28:56.415228   34650 system_svc.go:56] duration metric: took 13.616382ms WaitForService to wait for kubelet.
	I0919 17:28:56.415256   34650 kubeadm.go:581] duration metric: took 12.002159179s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:28:56.415273   34650 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:28:56.599557   34650 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:28:56.599583   34650 node_conditions.go:123] node cpu capacity is 2
	I0919 17:28:56.599593   34650 node_conditions.go:105] duration metric: took 184.31599ms to run NodePressure ...
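The NodePressure step only reads the node's reported capacity and conditions (here 2 CPUs and 17784752Ki of ephemeral storage). The same fields can be pulled directly, assuming the usual context name:

    $ kubectl --context test-preload-766296 get node test-preload-766296 -o jsonpath='{.status.capacity}{"\n"}'
    $ kubectl --context test-preload-766296 get node test-preload-766296 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'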
	I0919 17:28:56.599603   34650 start.go:228] waiting for startup goroutines ...
	I0919 17:28:56.599612   34650 start.go:233] waiting for cluster config update ...
	I0919 17:28:56.599621   34650 start.go:242] writing updated cluster config ...
	I0919 17:28:56.599872   34650 ssh_runner.go:195] Run: rm -f paused
	I0919 17:28:56.643950   34650 start.go:600] kubectl: 1.28.2, cluster: 1.24.4 (minor skew: 4)
	I0919 17:28:56.645901   34650 out.go:177] 
	W0919 17:28:56.647306   34650 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0919 17:28:56.648672   34650 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0919 17:28:56.650088   34650 out.go:177] * Done! kubectl is now configured to use "test-preload-766296" cluster and "default" namespace by default
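The version-skew warning is expected here: kubectl officially supports only one minor version of skew against the API server, and 1.28 against 1.24 is four minors apart. To check both versions, or to use a client that matches the cluster:

    $ kubectl --context test-preload-766296 version
    $ minikube -p test-preload-766296 kubectl -- get pods -A    # runs a kubectl matching the cluster's v1.24.4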
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:27:59 UTC, ends at Tue 2023-09-19 17:28:57 UTC. --
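The debug entries that follow are CRI-O answering CRI RPCs (Version, ImageFsInfo, ListContainers) issued by the kubelet. The same data can be queried by hand over the CRI socket from inside the guest, assuming crictl is present there as it normally is on the minikube ISO:

    $ minikube -p test-preload-766296 ssh "sudo crictl version"
    $ minikube -p test-preload-766296 ssh "sudo crictl ps"           # ListContainers
    $ minikube -p test-preload-766296 ssh "sudo crictl imagefsinfo"  # ImageFsInfo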
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.527251089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144537527236726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=72885658-af9d-4fb4-8629-1e8f0315d0c9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.528582578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=36cea0d6-c3f6-4b95-96e2-89e6856c0169 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.528632316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=36cea0d6-c3f6-4b95-96e2-89e6856c0169 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.528806555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3783d6fe4bb43e1b3d2fbf6531c40e0665859716976c50140e8ae5a39e822dd,PodSandboxId:65e790f70effdb6c6087c87d65b676573bead41f327f41a15721aec0965998ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1695144526590772146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6x8tf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e8bbed-dbdf-49cd-b38f-885a0eec1682,},Annotations:map[string]string{io.kubernetes.container.hash: 48a28864,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35efae451f443d0c60b4d6ea71160c59f71468b3445cfcff6887ef980e9d556f,PodSandboxId:45713cd6b25de74c4907c40326c670f0c4d7c2e490aa3e68fce148f7ab9617d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1695144523687151556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-27fmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f42e87a7-ec92-4d35-b7aa-939cecca949a,},Annotations:map[string]string{io.kubernetes.container.hash: e41a06f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2ecf2e9714dddd7c75217dbbe1cf91d0077c552c757ac0261af7a9ebff9f60,PodSandboxId:92e3522c5cb911ffc6f1f4292cda6f31525432b2f6f56c4183d05af4e587d13d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695144523586357217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
feb83a14-2729-4ae3-ac6c-dbfb3563c3f0,},Annotations:map[string]string{io.kubernetes.container.hash: 801d663e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27620caebc3f3d73c29e2f92f539aa2cac6fdc93e0b9de2ed95f0c48539d18d2,PodSandboxId:a039025870a1875479e30b187667b4598e1aa0381a9dbd9612f543f4d6076a0c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1695144515752854203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b72bac6c574fd399be2b7b8bdc938a75,},Annotations:map
[string]string{io.kubernetes.container.hash: 90ffb131,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292c96b4b71e4d4fe933be253a4dcb966d8f96f4da467e9d2bbd694bf0e1468b,PodSandboxId:08ff67da0ecfe4aca026b15230db5cf1b90a72c53344c213310d46386eddacc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1695144515488716784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d9f9ccdf5495d57c7770f4774e5a7d,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e97d4f494d91a5f5bf715122fdc50b432812bf68c6876e7b3ecc5bc0002e3e,PodSandboxId:b3d6bdbedbffb36d991c46285f4c6f86ca5431e07a094814c8feb09a1bd2a0b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1695144515441262313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 719b9cd4fd0c5c8b68d1df5700335162,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb5e6020a2c5ffa8f5b94d318ab91829ae2b0b4a5e62176a4b38ceee432de4,PodSandboxId:7b0a987201fc7be45222f39b491b3b969caa6cfa71a26e66b47326dc38eb9259,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1695144515113822563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11096ac9c1e182f94956db99b3726ad6,},Annotations:map[strin
g]string{io.kubernetes.container.hash: e63b93b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=36cea0d6-c3f6-4b95-96e2-89e6856c0169 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.567794809Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a7cd7c79-8412-4705-bc8a-f74219882880 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.567851178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a7cd7c79-8412-4705-bc8a-f74219882880 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.569033458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b3b1bac0-4297-4b7d-acf5-a243e199329b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.569574413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144537569557757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=b3b1bac0-4297-4b7d-acf5-a243e199329b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.570226816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eac7563f-69c2-4bd2-b10e-5e25be86beaa name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.570297430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eac7563f-69c2-4bd2-b10e-5e25be86beaa name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.570882189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3783d6fe4bb43e1b3d2fbf6531c40e0665859716976c50140e8ae5a39e822dd,PodSandboxId:65e790f70effdb6c6087c87d65b676573bead41f327f41a15721aec0965998ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1695144526590772146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6x8tf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e8bbed-dbdf-49cd-b38f-885a0eec1682,},Annotations:map[string]string{io.kubernetes.container.hash: 48a28864,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35efae451f443d0c60b4d6ea71160c59f71468b3445cfcff6887ef980e9d556f,PodSandboxId:45713cd6b25de74c4907c40326c670f0c4d7c2e490aa3e68fce148f7ab9617d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1695144523687151556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-27fmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f42e87a7-ec92-4d35-b7aa-939cecca949a,},Annotations:map[string]string{io.kubernetes.container.hash: e41a06f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2ecf2e9714dddd7c75217dbbe1cf91d0077c552c757ac0261af7a9ebff9f60,PodSandboxId:92e3522c5cb911ffc6f1f4292cda6f31525432b2f6f56c4183d05af4e587d13d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695144523586357217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
feb83a14-2729-4ae3-ac6c-dbfb3563c3f0,},Annotations:map[string]string{io.kubernetes.container.hash: 801d663e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27620caebc3f3d73c29e2f92f539aa2cac6fdc93e0b9de2ed95f0c48539d18d2,PodSandboxId:a039025870a1875479e30b187667b4598e1aa0381a9dbd9612f543f4d6076a0c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1695144515752854203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b72bac6c574fd399be2b7b8bdc938a75,},Annotations:map
[string]string{io.kubernetes.container.hash: 90ffb131,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292c96b4b71e4d4fe933be253a4dcb966d8f96f4da467e9d2bbd694bf0e1468b,PodSandboxId:08ff67da0ecfe4aca026b15230db5cf1b90a72c53344c213310d46386eddacc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1695144515488716784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d9f9ccdf5495d57c7770f4774e5a7d,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e97d4f494d91a5f5bf715122fdc50b432812bf68c6876e7b3ecc5bc0002e3e,PodSandboxId:b3d6bdbedbffb36d991c46285f4c6f86ca5431e07a094814c8feb09a1bd2a0b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1695144515441262313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 719b9cd4fd0c5c8b68d1df5700335162,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb5e6020a2c5ffa8f5b94d318ab91829ae2b0b4a5e62176a4b38ceee432de4,PodSandboxId:7b0a987201fc7be45222f39b491b3b969caa6cfa71a26e66b47326dc38eb9259,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1695144515113822563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11096ac9c1e182f94956db99b3726ad6,},Annotations:map[strin
g]string{io.kubernetes.container.hash: e63b93b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eac7563f-69c2-4bd2-b10e-5e25be86beaa name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.612113533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1af2c367-85b2-4525-8ce5-5350c7b717c0 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.612169438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1af2c367-85b2-4525-8ce5-5350c7b717c0 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.613129276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=db162151-b887-40f1-b5ef-cfdd57bc7b2f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.613641351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144537613626193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=db162151-b887-40f1-b5ef-cfdd57bc7b2f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.614161768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=04934396-5abc-4c74-abe6-6aa399bca6b3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.614212008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=04934396-5abc-4c74-abe6-6aa399bca6b3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.614361587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3783d6fe4bb43e1b3d2fbf6531c40e0665859716976c50140e8ae5a39e822dd,PodSandboxId:65e790f70effdb6c6087c87d65b676573bead41f327f41a15721aec0965998ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1695144526590772146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6x8tf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e8bbed-dbdf-49cd-b38f-885a0eec1682,},Annotations:map[string]string{io.kubernetes.container.hash: 48a28864,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35efae451f443d0c60b4d6ea71160c59f71468b3445cfcff6887ef980e9d556f,PodSandboxId:45713cd6b25de74c4907c40326c670f0c4d7c2e490aa3e68fce148f7ab9617d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1695144523687151556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-27fmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f42e87a7-ec92-4d35-b7aa-939cecca949a,},Annotations:map[string]string{io.kubernetes.container.hash: e41a06f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2ecf2e9714dddd7c75217dbbe1cf91d0077c552c757ac0261af7a9ebff9f60,PodSandboxId:92e3522c5cb911ffc6f1f4292cda6f31525432b2f6f56c4183d05af4e587d13d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695144523586357217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
feb83a14-2729-4ae3-ac6c-dbfb3563c3f0,},Annotations:map[string]string{io.kubernetes.container.hash: 801d663e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27620caebc3f3d73c29e2f92f539aa2cac6fdc93e0b9de2ed95f0c48539d18d2,PodSandboxId:a039025870a1875479e30b187667b4598e1aa0381a9dbd9612f543f4d6076a0c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1695144515752854203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b72bac6c574fd399be2b7b8bdc938a75,},Annotations:map
[string]string{io.kubernetes.container.hash: 90ffb131,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292c96b4b71e4d4fe933be253a4dcb966d8f96f4da467e9d2bbd694bf0e1468b,PodSandboxId:08ff67da0ecfe4aca026b15230db5cf1b90a72c53344c213310d46386eddacc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1695144515488716784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d9f9ccdf5495d57c7770f4774e5a7d,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e97d4f494d91a5f5bf715122fdc50b432812bf68c6876e7b3ecc5bc0002e3e,PodSandboxId:b3d6bdbedbffb36d991c46285f4c6f86ca5431e07a094814c8feb09a1bd2a0b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1695144515441262313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 719b9cd4fd0c5c8b68d1df5700335162,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb5e6020a2c5ffa8f5b94d318ab91829ae2b0b4a5e62176a4b38ceee432de4,PodSandboxId:7b0a987201fc7be45222f39b491b3b969caa6cfa71a26e66b47326dc38eb9259,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1695144515113822563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11096ac9c1e182f94956db99b3726ad6,},Annotations:map[strin
g]string{io.kubernetes.container.hash: e63b93b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=04934396-5abc-4c74-abe6-6aa399bca6b3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.649097404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=04c5a4cb-cb4e-47b0-afe4-a1bef057995e name=/runtime.v1.RuntimeService/Version
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.649179331Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=04c5a4cb-cb4e-47b0-afe4-a1bef057995e name=/runtime.v1.RuntimeService/Version
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.650051946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=418cdcdb-7ad3-4416-90d2-f3ecb150c6bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.650571286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144537650553084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=418cdcdb-7ad3-4416-90d2-f3ecb150c6bf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.651110474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3ce5b8cb-42f5-4e45-b863-94597a1eba6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.651228917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3ce5b8cb-42f5-4e45-b863-94597a1eba6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:28:57 test-preload-766296 crio[716]: time="2023-09-19 17:28:57.651460213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3783d6fe4bb43e1b3d2fbf6531c40e0665859716976c50140e8ae5a39e822dd,PodSandboxId:65e790f70effdb6c6087c87d65b676573bead41f327f41a15721aec0965998ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1695144526590772146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6x8tf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e8bbed-dbdf-49cd-b38f-885a0eec1682,},Annotations:map[string]string{io.kubernetes.container.hash: 48a28864,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35efae451f443d0c60b4d6ea71160c59f71468b3445cfcff6887ef980e9d556f,PodSandboxId:45713cd6b25de74c4907c40326c670f0c4d7c2e490aa3e68fce148f7ab9617d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1695144523687151556,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-27fmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f42e87a7-ec92-4d35-b7aa-939cecca949a,},Annotations:map[string]string{io.kubernetes.container.hash: e41a06f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2ecf2e9714dddd7c75217dbbe1cf91d0077c552c757ac0261af7a9ebff9f60,PodSandboxId:92e3522c5cb911ffc6f1f4292cda6f31525432b2f6f56c4183d05af4e587d13d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695144523586357217,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
feb83a14-2729-4ae3-ac6c-dbfb3563c3f0,},Annotations:map[string]string{io.kubernetes.container.hash: 801d663e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27620caebc3f3d73c29e2f92f539aa2cac6fdc93e0b9de2ed95f0c48539d18d2,PodSandboxId:a039025870a1875479e30b187667b4598e1aa0381a9dbd9612f543f4d6076a0c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1695144515752854203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b72bac6c574fd399be2b7b8bdc938a75,},Annotations:map
[string]string{io.kubernetes.container.hash: 90ffb131,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292c96b4b71e4d4fe933be253a4dcb966d8f96f4da467e9d2bbd694bf0e1468b,PodSandboxId:08ff67da0ecfe4aca026b15230db5cf1b90a72c53344c213310d46386eddacc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1695144515488716784,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d9f9ccdf5495d57c7770f4774e5a7d,},Annotations:map[string]string
{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0e97d4f494d91a5f5bf715122fdc50b432812bf68c6876e7b3ecc5bc0002e3e,PodSandboxId:b3d6bdbedbffb36d991c46285f4c6f86ca5431e07a094814c8feb09a1bd2a0b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1695144515441262313,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 719b9cd4fd0c5c8b68d1df5700335162,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fb5e6020a2c5ffa8f5b94d318ab91829ae2b0b4a5e62176a4b38ceee432de4,PodSandboxId:7b0a987201fc7be45222f39b491b3b969caa6cfa71a26e66b47326dc38eb9259,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1695144515113822563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-766296,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11096ac9c1e182f94956db99b3726ad6,},Annotations:map[strin
g]string{io.kubernetes.container.hash: e63b93b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3ce5b8cb-42f5-4e45-b863-94597a1eba6f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e3783d6fe4bb4       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   11 seconds ago      Running             coredns                   1                   65e790f70effd       coredns-6d4b75cb6d-6x8tf
	35efae451f443       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   45713cd6b25de       kube-proxy-27fmr
	8a2ecf2e9714d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   92e3522c5cb91       storage-provisioner
	27620caebc3f3       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   a039025870a18       etcd-test-preload-766296
	292c96b4b71e4       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   22 seconds ago      Running             kube-scheduler            1                   08ff67da0ecfe       kube-scheduler-test-preload-766296
	d0e97d4f494d9       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   b3d6bdbedbffb       kube-controller-manager-test-preload-766296
	80fb5e6020a2c       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   7b0a987201fc7       kube-apiserver-test-preload-766296
	
	* 
	* ==> coredns [e3783d6fe4bb43e1b3d2fbf6531c40e0665859716976c50140e8ae5a39e822dd] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:55533 - 11612 "HINFO IN 723709921376744940.2237974909539915550. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010326995s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-766296
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-766296
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=test-preload-766296
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_26_57_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:26:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-766296
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 17:28:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:28:51 +0000   Tue, 19 Sep 2023 17:26:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:28:51 +0000   Tue, 19 Sep 2023 17:26:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:28:51 +0000   Tue, 19 Sep 2023 17:26:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:28:51 +0000   Tue, 19 Sep 2023 17:28:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    test-preload-766296
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7296dfc0eb764cdab4a1267390f61b65
	  System UUID:                7296dfc0-eb76-4cda-b4a1-267390f61b65
	  Boot ID:                    5d31b858-874d-4f2e-924a-b4edfbc259b8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-6x8tf                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     108s
	  kube-system                 etcd-test-preload-766296                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m
	  kube-system                 kube-apiserver-test-preload-766296             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-test-preload-766296    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-27fmr                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-test-preload-766296             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13s                    kube-proxy       
	  Normal  Starting                 105s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s (x5 over 2m10s)  kubelet          Node test-preload-766296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x5 over 2m10s)  kubelet          Node test-preload-766296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x5 over 2m10s)  kubelet          Node test-preload-766296 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m                     kubelet          Node test-preload-766296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                     kubelet          Node test-preload-766296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                     kubelet          Node test-preload-766296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                110s                   kubelet          Node test-preload-766296 status is now: NodeReady
	  Normal  RegisteredNode           109s                   node-controller  Node test-preload-766296 event: Registered Node test-preload-766296 in Controller
	  Normal  Starting                 23s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)      kubelet          Node test-preload-766296 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)      kubelet          Node test-preload-766296 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)      kubelet          Node test-preload-766296 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                     node-controller  Node test-preload-766296 event: Registered Node test-preload-766296 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071586] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.288882] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.357196] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147192] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Sep19 17:28] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.120366] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.113491] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.145859] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.114550] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.223401] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[ +25.007014] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[ +10.075304] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.206879] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [27620caebc3f3d73c29e2f92f539aa2cac6fdc93e0b9de2ed95f0c48539d18d2] <==
	* {"level":"info","ts":"2023-09-19T17:28:37.305Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"f4acae94ef986412","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-09-19T17:28:37.306Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-19T17:28:37.306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 switched to configuration voters=(17630658595946783762)"}
	{"level":"info","ts":"2023-09-19T17:28:37.306Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","added-peer-id":"f4acae94ef986412","added-peer-peer-urls":["https://192.168.39.230:2380"]}
	{"level":"info","ts":"2023-09-19T17:28:37.306Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0aea99135fe63d","local-member-id":"f4acae94ef986412","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:28:37.307Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:28:37.316Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2023-09-19T17:28:37.316Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.230:2380"}
	{"level":"info","ts":"2023-09-19T17:28:37.316Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-19T17:28:37.317Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f4acae94ef986412","initial-advertise-peer-urls":["https://192.168.39.230:2380"],"listen-peer-urls":["https://192.168.39.230:2380"],"advertise-client-urls":["https://192.168.39.230:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.230:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-19T17:28:37.317Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-19T17:28:38.693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-19T17:28:38.693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-19T17:28:38.693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2023-09-19T17:28:38.693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2023-09-19T17:28:38.693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2023-09-19T17:28:38.693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2023-09-19T17:28:38.693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2023-09-19T17:28:38.693Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:test-preload-766296 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:28:38.694Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:28:38.695Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:28:38.695Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T17:28:38.696Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2023-09-19T17:28:38.697Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:28:38.697Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  17:28:57 up 1 min,  0 users,  load average: 0.62, 0.20, 0.07
	Linux test-preload-766296 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [80fb5e6020a2c5ffa8f5b94d318ab91829ae2b0b4a5e62176a4b38ceee432de4] <==
	* I0919 17:28:41.202337       1 controller.go:85] Starting OpenAPI V3 controller
	I0919 17:28:41.202524       1 naming_controller.go:291] Starting NamingConditionController
	I0919 17:28:41.202832       1 establishing_controller.go:76] Starting EstablishingController
	I0919 17:28:41.203047       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0919 17:28:41.203162       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0919 17:28:41.203285       1 crd_finalizer.go:266] Starting CRDFinalizer
	E0919 17:28:41.293002       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0919 17:28:41.301500       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0919 17:28:41.348685       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0919 17:28:41.354488       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 17:28:41.362063       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0919 17:28:41.362125       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0919 17:28:41.370530       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 17:28:41.370595       1 cache.go:39] Caches are synced for autoregister controller
	I0919 17:28:41.377065       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 17:28:41.835531       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0919 17:28:42.164139       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 17:28:42.911952       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0919 17:28:42.948725       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0919 17:28:43.032162       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0919 17:28:43.080485       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 17:28:43.099046       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 17:28:43.915114       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0919 17:28:54.049800       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 17:28:54.144776       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [d0e97d4f494d91a5f5bf715122fdc50b432812bf68c6876e7b3ecc5bc0002e3e] <==
	* I0919 17:28:54.034495       1 shared_informer.go:262] Caches are synced for endpoint
	I0919 17:28:54.036889       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0919 17:28:54.040311       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0919 17:28:54.044058       1 shared_informer.go:262] Caches are synced for cronjob
	I0919 17:28:54.047450       1 shared_informer.go:262] Caches are synced for job
	I0919 17:28:54.052870       1 shared_informer.go:262] Caches are synced for PV protection
	I0919 17:28:54.055302       1 shared_informer.go:262] Caches are synced for namespace
	I0919 17:28:54.055367       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0919 17:28:54.145519       1 shared_informer.go:262] Caches are synced for daemon sets
	I0919 17:28:54.152357       1 shared_informer.go:262] Caches are synced for disruption
	I0919 17:28:54.152503       1 disruption.go:371] Sending events to api server.
	I0919 17:28:54.161915       1 shared_informer.go:262] Caches are synced for HPA
	I0919 17:28:54.163324       1 shared_informer.go:262] Caches are synced for deployment
	I0919 17:28:54.212792       1 shared_informer.go:262] Caches are synced for attach detach
	I0919 17:28:54.215004       1 shared_informer.go:262] Caches are synced for expand
	I0919 17:28:54.231204       1 shared_informer.go:262] Caches are synced for PVC protection
	I0919 17:28:54.239281       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0919 17:28:54.242242       1 shared_informer.go:262] Caches are synced for stateful set
	I0919 17:28:54.254477       1 shared_informer.go:262] Caches are synced for resource quota
	I0919 17:28:54.264821       1 shared_informer.go:262] Caches are synced for resource quota
	I0919 17:28:54.268423       1 shared_informer.go:262] Caches are synced for persistent volume
	I0919 17:28:54.291178       1 shared_informer.go:262] Caches are synced for ephemeral
	I0919 17:28:54.681029       1 shared_informer.go:262] Caches are synced for garbage collector
	I0919 17:28:54.684484       1 shared_informer.go:262] Caches are synced for garbage collector
	I0919 17:28:54.684519       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [35efae451f443d0c60b4d6ea71160c59f71468b3445cfcff6887ef980e9d556f] <==
	* I0919 17:28:43.866318       1 node.go:163] Successfully retrieved node IP: 192.168.39.230
	I0919 17:28:43.866590       1 server_others.go:138] "Detected node IP" address="192.168.39.230"
	I0919 17:28:43.866662       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0919 17:28:43.908576       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0919 17:28:43.908646       1 server_others.go:206] "Using iptables Proxier"
	I0919 17:28:43.908753       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0919 17:28:43.909094       1 server.go:661] "Version info" version="v1.24.4"
	I0919 17:28:43.909292       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:28:43.910068       1 config.go:317] "Starting service config controller"
	I0919 17:28:43.910136       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0919 17:28:43.910169       1 config.go:226] "Starting endpoint slice config controller"
	I0919 17:28:43.910184       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0919 17:28:43.911819       1 config.go:444] "Starting node config controller"
	I0919 17:28:43.911934       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0919 17:28:44.010435       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0919 17:28:44.010597       1 shared_informer.go:262] Caches are synced for service config
	I0919 17:28:44.012163       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [292c96b4b71e4d4fe933be253a4dcb966d8f96f4da467e9d2bbd694bf0e1468b] <==
	* I0919 17:28:37.637604       1 serving.go:348] Generated self-signed cert in-memory
	W0919 17:28:41.236498       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 17:28:41.236829       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:28:41.236942       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 17:28:41.236971       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 17:28:41.292725       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0919 17:28:41.292865       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:28:41.299162       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 17:28:41.299355       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 17:28:41.299467       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 17:28:41.299891       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 17:28:41.400301       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:27:59 UTC, ends at Tue 2023-09-19 17:28:58 UTC. --
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.007961    1104 topology_manager.go:200] "Topology Admit Handler"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.071813    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7e8bbed-dbdf-49cd-b38f-885a0eec1682-config-volume\") pod \"coredns-6d4b75cb6d-6x8tf\" (UID: \"d7e8bbed-dbdf-49cd-b38f-885a0eec1682\") " pod="kube-system/coredns-6d4b75cb6d-6x8tf"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.071923    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9jj\" (UniqueName: \"kubernetes.io/projected/d7e8bbed-dbdf-49cd-b38f-885a0eec1682-kube-api-access-vh9jj\") pod \"coredns-6d4b75cb6d-6x8tf\" (UID: \"d7e8bbed-dbdf-49cd-b38f-885a0eec1682\") " pod="kube-system/coredns-6d4b75cb6d-6x8tf"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.071949    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sr56\" (UniqueName: \"kubernetes.io/projected/feb83a14-2729-4ae3-ac6c-dbfb3563c3f0-kube-api-access-5sr56\") pod \"storage-provisioner\" (UID: \"feb83a14-2729-4ae3-ac6c-dbfb3563c3f0\") " pod="kube-system/storage-provisioner"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.071973    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f42e87a7-ec92-4d35-b7aa-939cecca949a-lib-modules\") pod \"kube-proxy-27fmr\" (UID: \"f42e87a7-ec92-4d35-b7aa-939cecca949a\") " pod="kube-system/kube-proxy-27fmr"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.071998    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/feb83a14-2729-4ae3-ac6c-dbfb3563c3f0-tmp\") pod \"storage-provisioner\" (UID: \"feb83a14-2729-4ae3-ac6c-dbfb3563c3f0\") " pod="kube-system/storage-provisioner"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.072018    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f42e87a7-ec92-4d35-b7aa-939cecca949a-kube-proxy\") pod \"kube-proxy-27fmr\" (UID: \"f42e87a7-ec92-4d35-b7aa-939cecca949a\") " pod="kube-system/kube-proxy-27fmr"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.072042    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f42e87a7-ec92-4d35-b7aa-939cecca949a-xtables-lock\") pod \"kube-proxy-27fmr\" (UID: \"f42e87a7-ec92-4d35-b7aa-939cecca949a\") " pod="kube-system/kube-proxy-27fmr"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.072060    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv285\" (UniqueName: \"kubernetes.io/projected/f42e87a7-ec92-4d35-b7aa-939cecca949a-kube-api-access-kv285\") pod \"kube-proxy-27fmr\" (UID: \"f42e87a7-ec92-4d35-b7aa-939cecca949a\") " pod="kube-system/kube-proxy-27fmr"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.072071    1104 reconciler.go:159] "Reconciler: start to sync state"
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.374944    1104 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8e3eb1c-6b52-449c-b48d-d7be312c74c9-config-volume\") pod \"e8e3eb1c-6b52-449c-b48d-d7be312c74c9\" (UID: \"e8e3eb1c-6b52-449c-b48d-d7be312c74c9\") "
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.375013    1104 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-466n6\" (UniqueName: \"kubernetes.io/projected/e8e3eb1c-6b52-449c-b48d-d7be312c74c9-kube-api-access-466n6\") pod \"e8e3eb1c-6b52-449c-b48d-d7be312c74c9\" (UID: \"e8e3eb1c-6b52-449c-b48d-d7be312c74c9\") "
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: E0919 17:28:42.375865    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: E0919 17:28:42.375925    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d7e8bbed-dbdf-49cd-b38f-885a0eec1682-config-volume podName:d7e8bbed-dbdf-49cd-b38f-885a0eec1682 nodeName:}" failed. No retries permitted until 2023-09-19 17:28:42.875905571 +0000 UTC m=+8.986853355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d7e8bbed-dbdf-49cd-b38f-885a0eec1682-config-volume") pod "coredns-6d4b75cb6d-6x8tf" (UID: "d7e8bbed-dbdf-49cd-b38f-885a0eec1682") : object "kube-system"/"coredns" not registered
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: W0919 17:28:42.377587    1104 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/e8e3eb1c-6b52-449c-b48d-d7be312c74c9/volumes/kubernetes.io~projected/kube-api-access-466n6: clearQuota called, but quotas disabled
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: W0919 17:28:42.377757    1104 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/e8e3eb1c-6b52-449c-b48d-d7be312c74c9/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.377852    1104 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e3eb1c-6b52-449c-b48d-d7be312c74c9-kube-api-access-466n6" (OuterVolumeSpecName: "kube-api-access-466n6") pod "e8e3eb1c-6b52-449c-b48d-d7be312c74c9" (UID: "e8e3eb1c-6b52-449c-b48d-d7be312c74c9"). InnerVolumeSpecName "kube-api-access-466n6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.378209    1104 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e3eb1c-6b52-449c-b48d-d7be312c74c9-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8e3eb1c-6b52-449c-b48d-d7be312c74c9" (UID: "e8e3eb1c-6b52-449c-b48d-d7be312c74c9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.475520    1104 reconciler.go:384] "Volume detached for volume \"kube-api-access-466n6\" (UniqueName: \"kubernetes.io/projected/e8e3eb1c-6b52-449c-b48d-d7be312c74c9-kube-api-access-466n6\") on node \"test-preload-766296\" DevicePath \"\""
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: I0919 17:28:42.475548    1104 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8e3eb1c-6b52-449c-b48d-d7be312c74c9-config-volume\") on node \"test-preload-766296\" DevicePath \"\""
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: E0919 17:28:42.878622    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 17:28:42 test-preload-766296 kubelet[1104]: E0919 17:28:42.878712    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d7e8bbed-dbdf-49cd-b38f-885a0eec1682-config-volume podName:d7e8bbed-dbdf-49cd-b38f-885a0eec1682 nodeName:}" failed. No retries permitted until 2023-09-19 17:28:43.878696822 +0000 UTC m=+9.989644603 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d7e8bbed-dbdf-49cd-b38f-885a0eec1682-config-volume") pod "coredns-6d4b75cb6d-6x8tf" (UID: "d7e8bbed-dbdf-49cd-b38f-885a0eec1682") : object "kube-system"/"coredns" not registered
	Sep 19 17:28:43 test-preload-766296 kubelet[1104]: E0919 17:28:43.886802    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 17:28:43 test-preload-766296 kubelet[1104]: E0919 17:28:43.886897    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d7e8bbed-dbdf-49cd-b38f-885a0eec1682-config-volume podName:d7e8bbed-dbdf-49cd-b38f-885a0eec1682 nodeName:}" failed. No retries permitted until 2023-09-19 17:28:45.886881687 +0000 UTC m=+11.997829467 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d7e8bbed-dbdf-49cd-b38f-885a0eec1682-config-volume") pod "coredns-6d4b75cb6d-6x8tf" (UID: "d7e8bbed-dbdf-49cd-b38f-885a0eec1682") : object "kube-system"/"coredns" not registered
	Sep 19 17:28:44 test-preload-766296 kubelet[1104]: I0919 17:28:44.160081    1104 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e8e3eb1c-6b52-449c-b48d-d7be312c74c9 path="/var/lib/kubelet/pods/e8e3eb1c-6b52-449c-b48d-d7be312c74c9/volumes"
	
	* 
	* ==> storage-provisioner [8a2ecf2e9714dddd7c75217dbbe1cf91d0077c552c757ac0261af7a9ebff9f60] <==
	* I0919 17:28:43.714549       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-766296 -n test-preload-766296
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-766296 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-766296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-766296
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-766296: (1.066646745s)
--- FAIL: TestPreload (263.79s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (163.5s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1828515787.exe start -p running-upgrade-435929 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1828515787.exe start -p running-upgrade-435929 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m33.679198048s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-435929 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-435929 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (5.053728153s)

                                                
                                                
-- stdout --
	* [running-upgrade-435929] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-435929 in cluster running-upgrade-435929
	* Updating the running kvm2 "running-upgrade-435929" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:33:33.807372   37948 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:33:33.807575   37948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:33:33.807583   37948 out.go:309] Setting ErrFile to fd 2...
	I0919 17:33:33.807591   37948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:33:33.807906   37948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:33:33.808680   37948 out.go:303] Setting JSON to false
	I0919 17:33:33.809874   37948 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4564,"bootTime":1695140250,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:33:33.809955   37948 start.go:138] virtualization: kvm guest
	I0919 17:33:33.812548   37948 out.go:177] * [running-upgrade-435929] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:33:33.815256   37948 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:33:33.815229   37948 notify.go:220] Checking for updates...
	I0919 17:33:33.823982   37948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:33:33.826756   37948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:33:33.828445   37948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:33:33.830613   37948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:33:33.832522   37948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:33:33.835325   37948 config.go:182] Loaded profile config "running-upgrade-435929": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0919 17:33:33.835376   37948 start_flags.go:686] config upgrade: Driver=kvm2
	I0919 17:33:33.835400   37948 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0919 17:33:33.835520   37948 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/running-upgrade-435929/config.json ...
	I0919 17:33:33.836231   37948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:33:33.836326   37948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:33:33.875587   37948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38287
	I0919 17:33:33.879975   37948 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:33:33.880909   37948 main.go:141] libmachine: Using API Version  1
	I0919 17:33:33.880973   37948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:33:33.881389   37948 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:33:33.881640   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .DriverName
	I0919 17:33:33.884058   37948 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0919 17:33:33.885822   37948 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:33:33.886177   37948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:33:33.886233   37948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:33:33.920597   37948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39065
	I0919 17:33:33.924533   37948 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:33:33.926969   37948 main.go:141] libmachine: Using API Version  1
	I0919 17:33:33.926996   37948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:33:33.927409   37948 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:33:33.927751   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .DriverName
	I0919 17:33:33.977712   37948 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:33:33.979591   37948 start.go:298] selected driver: kvm2
	I0919 17:33:33.979612   37948 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-435929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.167 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0919 17:33:33.979743   37948 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:33:33.980686   37948 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:33.980786   37948 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:33:34.003886   37948 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:33:34.004355   37948 cni.go:84] Creating CNI manager for ""
	I0919 17:33:34.004377   37948 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0919 17:33:34.004388   37948 start_flags.go:321] config:
	{Name:running-upgrade-435929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.167 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0919 17:33:34.004626   37948 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:34.006952   37948 out.go:177] * Starting control plane node running-upgrade-435929 in cluster running-upgrade-435929
	I0919 17:33:34.008823   37948 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0919 17:33:34.121745   37948 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0919 17:33:34.121894   37948 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/running-upgrade-435929/config.json ...
	I0919 17:33:34.121994   37948 cache.go:107] acquiring lock: {Name:mk0fc11856affac4f336bb2f1de8eb055ba2f68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:34.122078   37948 cache.go:115] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0919 17:33:34.122092   37948 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.484µs
	I0919 17:33:34.122107   37948 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0919 17:33:34.122122   37948 cache.go:107] acquiring lock: {Name:mk05159883ec3364195ac74c8ddaec9bd4805909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:34.122213   37948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0919 17:33:34.122237   37948 cache.go:107] acquiring lock: {Name:mkd0c6fcd8284c10b54830addda581afe813cce6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:34.122336   37948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0919 17:33:34.122214   37948 start.go:365] acquiring machines lock for running-upgrade-435929: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:33:34.122431   37948 start.go:369] acquired machines lock for "running-upgrade-435929" in 20.251µs
	I0919 17:33:34.122438   37948 cache.go:107] acquiring lock: {Name:mk59a9183584d382ef0e7bc5d84487802531e330 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:34.122456   37948 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:33:34.122470   37948 fix.go:54] fixHost starting: minikube
	I0919 17:33:34.122471   37948 cache.go:107] acquiring lock: {Name:mk92bc9d8ceedb9f1be25caaca69be9cb69eae1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:34.122511   37948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0919 17:33:34.122624   37948 cache.go:107] acquiring lock: {Name:mk292b79d67e56345b66e4bc5097ee8d1502e283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:34.122658   37948 cache.go:107] acquiring lock: {Name:mkcd130f5fb05350edf901846afe919cbfa1969d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:34.122706   37948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0919 17:33:34.122637   37948 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0919 17:33:34.122831   37948 cache.go:107] acquiring lock: {Name:mkf6ca7fcc1c247c15d8012375de69780d47287f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:33:34.122859   37948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:33:34.122890   37948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:33:34.122919   37948 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0919 17:33:34.122934   37948 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0919 17:33:34.128670   37948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0919 17:33:34.128679   37948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0919 17:33:34.128760   37948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0919 17:33:34.128897   37948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0919 17:33:34.129019   37948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0919 17:33:34.129055   37948 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0919 17:33:34.129096   37948 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0919 17:33:34.143284   37948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34589
	I0919 17:33:34.148813   37948 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:33:34.149371   37948 main.go:141] libmachine: Using API Version  1
	I0919 17:33:34.149389   37948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:33:34.149732   37948 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:33:34.149842   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .DriverName
	I0919 17:33:34.149915   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetState
	I0919 17:33:34.152274   37948 fix.go:102] recreateIfNeeded on running-upgrade-435929: state=Running err=<nil>
	W0919 17:33:34.152296   37948 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:33:34.156145   37948 out.go:177] * Updating the running kvm2 "running-upgrade-435929" VM ...
	I0919 17:33:34.158730   37948 machine.go:88] provisioning docker machine ...
	I0919 17:33:34.158761   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .DriverName
	I0919 17:33:34.158980   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetMachineName
	I0919 17:33:34.159085   37948 buildroot.go:166] provisioning hostname "running-upgrade-435929"
	I0919 17:33:34.159100   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetMachineName
	I0919 17:33:34.159187   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHHostname
	I0919 17:33:34.162260   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.162853   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:34.162932   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.163249   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHPort
	I0919 17:33:34.164500   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:34.165005   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:34.165231   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHUsername
	I0919 17:33:34.165353   37948 main.go:141] libmachine: Using SSH client type: native
	I0919 17:33:34.165811   37948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0919 17:33:34.165826   37948 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-435929 && echo "running-upgrade-435929" | sudo tee /etc/hostname
	I0919 17:33:34.298862   37948 cache.go:162] opening:  /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0919 17:33:34.307188   37948 cache.go:162] opening:  /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0919 17:33:34.351225   37948 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-435929
	
	I0919 17:33:34.351278   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHHostname
	I0919 17:33:34.356204   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.356716   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:34.356796   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.357065   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHPort
	I0919 17:33:34.357267   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:34.357407   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:34.357624   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHUsername
	I0919 17:33:34.357840   37948 main.go:141] libmachine: Using SSH client type: native
	I0919 17:33:34.358341   37948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0919 17:33:34.358373   37948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-435929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-435929/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-435929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:33:34.386143   37948 cache.go:157] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0919 17:33:34.386180   37948 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 263.525867ms
	I0919 17:33:34.386196   37948 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0919 17:33:34.429423   37948 cache.go:162] opening:  /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0919 17:33:34.438419   37948 cache.go:162] opening:  /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0919 17:33:34.439731   37948 cache.go:162] opening:  /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0919 17:33:34.452113   37948 cache.go:162] opening:  /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0919 17:33:34.452388   37948 cache.go:162] opening:  /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0919 17:33:34.537514   37948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:33:34.537546   37948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:33:34.537595   37948 buildroot.go:174] setting up certificates
	I0919 17:33:34.537606   37948 provision.go:83] configureAuth start
	I0919 17:33:34.537621   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetMachineName
	I0919 17:33:34.540545   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetIP
	I0919 17:33:34.547533   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHHostname
	I0919 17:33:34.547536   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.547624   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:34.547656   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.550716   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.551115   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:34.551150   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.551487   37948 provision.go:138] copyHostCerts
	I0919 17:33:34.551541   37948 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:33:34.551554   37948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:33:34.551617   37948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:33:34.551754   37948 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:33:34.551769   37948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:33:34.551803   37948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:33:34.551895   37948 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:33:34.551904   37948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:33:34.551924   37948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:33:34.552027   37948 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-435929 san=[192.168.50.167 192.168.50.167 localhost 127.0.0.1 minikube running-upgrade-435929]
	I0919 17:33:34.724052   37948 provision.go:172] copyRemoteCerts
	I0919 17:33:34.724130   37948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:33:34.724221   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHHostname
	I0919 17:33:34.728750   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.730483   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHPort
	I0919 17:33:34.730536   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:34.730572   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.730857   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:34.731115   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHUsername
	I0919 17:33:34.731337   37948 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/running-upgrade-435929/id_rsa Username:docker}
	I0919 17:33:34.848863   37948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:33:34.880467   37948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 17:33:34.921924   37948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:33:34.933827   37948 cache.go:157] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0919 17:33:34.933854   37948 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 811.413693ms
	I0919 17:33:34.933869   37948 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0919 17:33:34.984588   37948 provision.go:86] duration metric: configureAuth took 446.9625ms
	I0919 17:33:34.984619   37948 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:33:34.984839   37948 config.go:182] Loaded profile config "running-upgrade-435929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0919 17:33:34.984963   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHHostname
	I0919 17:33:34.988633   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.989390   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHPort
	I0919 17:33:34.989474   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:34.989705   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:34.989917   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:34.990433   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:34.990609   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHUsername
	I0919 17:33:34.990767   37948 main.go:141] libmachine: Using SSH client type: native
	I0919 17:33:34.991319   37948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0919 17:33:34.991342   37948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:33:35.165696   37948 cache.go:157] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0919 17:33:35.165733   37948 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.043112481s
	I0919 17:33:35.165759   37948 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0919 17:33:35.389617   37948 cache.go:157] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0919 17:33:35.389651   37948 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.267528524s
	I0919 17:33:35.389689   37948 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0919 17:33:35.421366   37948 cache.go:157] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0919 17:33:35.421397   37948 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.298960199s
	I0919 17:33:35.421415   37948 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0919 17:33:35.855998   37948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:33:35.856048   37948 machine.go:91] provisioned docker machine in 1.697298718s
	I0919 17:33:35.856061   37948 start.go:300] post-start starting for "running-upgrade-435929" (driver="kvm2")
	I0919 17:33:35.856074   37948 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:33:35.856095   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .DriverName
	I0919 17:33:35.856448   37948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:33:35.856478   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHHostname
	I0919 17:33:35.859274   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:35.859872   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:35.859923   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:35.859987   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHPort
	I0919 17:33:35.860141   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:35.860239   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHUsername
	I0919 17:33:35.860331   37948 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/running-upgrade-435929/id_rsa Username:docker}
	I0919 17:33:35.994017   37948 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:33:36.005417   37948 info.go:137] Remote host: Buildroot 2019.02.7
	I0919 17:33:36.005450   37948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:33:36.005525   37948 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:33:36.005635   37948 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:33:36.005757   37948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:33:36.019636   37948 cache.go:157] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0919 17:33:36.019672   37948 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.896842748s
	I0919 17:33:36.019687   37948 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0919 17:33:36.020469   37948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:33:36.042347   37948 cache.go:157] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0919 17:33:36.042379   37948 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.920144598s
	I0919 17:33:36.042394   37948 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0919 17:33:36.042412   37948 cache.go:87] Successfully saved all images to host disk.
	I0919 17:33:36.068041   37948 start.go:303] post-start completed in 211.965819ms
	I0919 17:33:36.068063   37948 fix.go:56] fixHost completed within 1.945592922s
	I0919 17:33:36.068082   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHHostname
	I0919 17:33:36.071398   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:36.071826   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:36.071912   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:36.072119   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHPort
	I0919 17:33:36.072296   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:36.072485   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:36.072689   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHUsername
	I0919 17:33:36.072925   37948 main.go:141] libmachine: Using SSH client type: native
	I0919 17:33:36.073473   37948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.167 22 <nil> <nil>}
	I0919 17:33:36.073495   37948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 17:33:36.207313   37948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695144816.204246194
	
	I0919 17:33:36.207336   37948 fix.go:206] guest clock: 1695144816.204246194
	I0919 17:33:36.207347   37948 fix.go:219] Guest: 2023-09-19 17:33:36.204246194 +0000 UTC Remote: 2023-09-19 17:33:36.068067316 +0000 UTC m=+2.320078896 (delta=136.178878ms)
	I0919 17:33:36.207376   37948 fix.go:190] guest clock delta is within tolerance: 136.178878ms
	I0919 17:33:36.207387   37948 start.go:83] releasing machines lock for "running-upgrade-435929", held for 2.084940688s
	I0919 17:33:36.207411   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .DriverName
	I0919 17:33:36.207695   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetIP
	I0919 17:33:36.211091   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:36.211494   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:36.211561   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:36.211720   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .DriverName
	I0919 17:33:36.212245   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .DriverName
	I0919 17:33:36.212464   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .DriverName
	I0919 17:33:36.212554   37948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:33:36.212603   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHHostname
	I0919 17:33:36.212719   37948 ssh_runner.go:195] Run: cat /version.json
	I0919 17:33:36.212744   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHHostname
	I0919 17:33:36.215943   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:36.216659   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:36.216990   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:36.217038   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:36.217395   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHPort
	I0919 17:33:36.217597   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:36.217660   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:e1:63", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:31:41 +0000 UTC Type:0 Mac:52:54:00:8e:e1:63 Iaid: IPaddr:192.168.50.167 Prefix:24 Hostname:running-upgrade-435929 Clientid:01:52:54:00:8e:e1:63}
	I0919 17:33:36.217678   37948 main.go:141] libmachine: (running-upgrade-435929) DBG | domain running-upgrade-435929 has defined IP address 192.168.50.167 and MAC address 52:54:00:8e:e1:63 in network minikube-net
	I0919 17:33:36.217791   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHUsername
	I0919 17:33:36.217828   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHPort
	I0919 17:33:36.217975   37948 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/running-upgrade-435929/id_rsa Username:docker}
	I0919 17:33:36.218580   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHKeyPath
	I0919 17:33:36.218791   37948 main.go:141] libmachine: (running-upgrade-435929) Calling .GetSSHUsername
	I0919 17:33:36.218947   37948 sshutil.go:53] new ssh client: &{IP:192.168.50.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/running-upgrade-435929/id_rsa Username:docker}
	W0919 17:33:36.333270   37948 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0919 17:33:36.333359   37948 ssh_runner.go:195] Run: systemctl --version
	I0919 17:33:36.339051   37948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:33:36.486463   37948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:33:36.500506   37948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:33:36.500586   37948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:33:36.508713   37948 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 17:33:36.508790   37948 start.go:469] detecting cgroup driver to use...
	I0919 17:33:36.508883   37948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:33:36.525030   37948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:33:36.538120   37948 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:33:36.538187   37948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:33:36.552793   37948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:33:36.563047   37948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0919 17:33:36.576286   37948 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0919 17:33:36.576388   37948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:33:36.698192   37948 docker.go:212] disabling docker service ...
	I0919 17:33:36.698255   37948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:33:37.724619   37948 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.026338555s)
	I0919 17:33:37.724753   37948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:33:37.739882   37948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:33:37.897309   37948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:33:38.066185   37948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:33:38.075834   37948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:33:38.089523   37948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0919 17:33:38.089601   37948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:33:38.289970   37948 out.go:177] 
	W0919 17:33:38.444538   37948 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0919 17:33:38.444782   37948 out.go:239] * 
	* 
	W0919 17:33:38.446063   37948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 17:33:38.589486   37948 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-435929 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-19 17:33:38.812620407 +0000 UTC m=+3551.744604455
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-435929 -n running-upgrade-435929
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-435929 -n running-upgrade-435929: exit status 4 (637.838819ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:33:39.032050   38006 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-435929" does not appear in /home/jenkins/minikube-integration/17240-6042/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-435929" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-435929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-435929
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-435929: (1.991519469s)
--- FAIL: TestRunningBinaryUpgrade (163.50s)
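Note: the upgrade gets as far as re-provisioning the running VM; the exit status 90 comes from the RUNTIME_ENABLE step, where the new binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, a drop-in file the v1.6.0 guest image started by the old v1.6.2 binary does not ship. A rough Go sketch of a guarded variant that probes for the drop-in and falls back to the legacy /etc/crio/crio.conf — the helper name and fallback path are assumptions for illustration, not minikube's actual fix, and it would have to run as root on the guest:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// updatePauseImage is a hypothetical helper: probe for the cri-o drop-in
	// config first, then fall back to the legacy single config file found on
	// older guest images such as the v1.6.0 ISO.
	func updatePauseImage(pauseImage string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // missing on the v1.6.0 guest, per the log
			"/etc/crio/crio.conf",                // legacy location (assumed fallback)
		}
		for _, path := range candidates {
			if _, err := os.Stat(path); err != nil {
				continue // candidate not present, try the next one
			}
			expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage)
			if out, err := exec.Command("sudo", "sed", "-i", expr, path).CombinedOutput(); err != nil {
				return fmt.Errorf("update pause_image in %s: %v: %s", path, err, out)
			}
			return nil
		}
		return fmt.Errorf("no cri-o config found to update pause_image")
	}

	func main() {
		if err := updatePauseImage("registry.k8s.io/pause:3.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

Either way, the root cause visible in the log is the old guest image lacking the newer cri-o config layout the current binary expects.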

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (321.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3198021959.exe start -p stopped-upgrade-359189 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.3198021959.exe start -p stopped-upgrade-359189 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 70 (6.248916995s)

                                                
                                                
-- stdout --
	! [stopped-upgrade-359189] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig2675324593
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Downloading VM boot image ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.31.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.31.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	    > minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
	    > minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB [------------] 100.00% 32.58 MiB p/s 5s (intermediate progress ticks from 0.06% upward elided)
	* 
	X Failed to cache ISO: rename /home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/minikube-v1.6.0.iso.download /home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/minikube-v1.6.0.iso: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
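Note: this first attempt with the old binary dies while caching the boot ISO — the download completes, but renaming the ".download" temp file into place fails with "no such file or directory"; the immediate retry below succeeds in 2m26s, so it reads as a transient problem on the cache directory rather than a persistent one. For reference, a minimal Go sketch of the download-to-temp-then-rename pattern with a single retry; the URL comes from the log, while the destination path, helper names, and retry policy are illustrative assumptions, not minikube's implementation:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
		"path/filepath"
		"time"
	)

	// cacheISO sketches the pattern that fails above: fetch into a ".download"
	// temp file beside the destination, then rename it into place, retrying
	// once if either step fails.
	func cacheISO(url, dest string) error {
		tmp := dest + ".download"
		if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
			return err
		}
		var lastErr error
		for attempt := 1; attempt <= 2; attempt++ {
			if err := download(url, tmp); err != nil {
				lastErr = err
			} else if err := os.Rename(tmp, dest); err != nil {
				lastErr = err // the "no such file or directory" case from the log
			} else {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("caching %s: %w", dest, lastErr)
	}

	func download(url, path string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		f, err := os.Create(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	func main() {
		// Placeholder destination for illustration; the real cache lives under MINIKUBE_HOME.
		if err := cacheISO("https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso",
			"/tmp/iso-cache/minikube-v1.6.0.iso"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}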
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3198021959.exe start -p stopped-upgrade-359189 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0919 17:31:14.060281   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3198021959.exe start -p stopped-upgrade-359189 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m26.601059068s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3198021959.exe -p stopped-upgrade-359189 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3198021959.exe -p stopped-upgrade-359189 stop: (1m33.017990861s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-359189 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-359189 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m14.958953717s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-359189] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-359189 in cluster stopped-upgrade-359189
	* Restarting existing kvm2 VM for "stopped-upgrade-359189" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:35:06.766532   39418 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:35:06.766800   39418 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:35:06.766810   39418 out.go:309] Setting ErrFile to fd 2...
	I0919 17:35:06.766814   39418 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:35:06.767017   39418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:35:06.767598   39418 out.go:303] Setting JSON to false
	I0919 17:35:06.768517   39418 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4657,"bootTime":1695140250,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:35:06.768568   39418 start.go:138] virtualization: kvm guest
	I0919 17:35:06.770499   39418 out.go:177] * [stopped-upgrade-359189] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:35:06.772126   39418 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:35:06.773409   39418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:35:06.772161   39418 notify.go:220] Checking for updates...
	I0919 17:35:06.774879   39418 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:35:06.776110   39418 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:35:06.777402   39418 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:35:06.778619   39418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:35:06.780207   39418 config.go:182] Loaded profile config "stopped-upgrade-359189": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0919 17:35:06.780224   39418 start_flags.go:686] config upgrade: Driver=kvm2
	I0919 17:35:06.780235   39418 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0919 17:35:06.780368   39418 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/stopped-upgrade-359189/config.json ...
	I0919 17:35:06.780966   39418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:35:06.781044   39418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:35:06.795882   39418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0919 17:35:06.796324   39418 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:35:06.796934   39418 main.go:141] libmachine: Using API Version  1
	I0919 17:35:06.796960   39418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:35:06.797396   39418 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:35:06.797589   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	I0919 17:35:06.799930   39418 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0919 17:35:06.801359   39418 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:35:06.801712   39418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:35:06.801760   39418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:35:06.815881   39418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0919 17:35:06.816235   39418 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:35:06.816772   39418 main.go:141] libmachine: Using API Version  1
	I0919 17:35:06.816805   39418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:35:06.817208   39418 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:35:06.817398   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	I0919 17:35:06.852507   39418 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:35:06.853883   39418 start.go:298] selected driver: kvm2
	I0919 17:35:06.853895   39418 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-359189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.38 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0919 17:35:06.854015   39418 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:35:06.854639   39418 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.854715   39418 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:35:06.868351   39418 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:35:06.868678   39418 cni.go:84] Creating CNI manager for ""
	I0919 17:35:06.868700   39418 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0919 17:35:06.868708   39418 start_flags.go:321] config:
	{Name:stopped-upgrade-359189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.38 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0919 17:35:06.868904   39418 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.870728   39418 out.go:177] * Starting control plane node stopped-upgrade-359189 in cluster stopped-upgrade-359189
	I0919 17:35:06.872042   39418 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0919 17:35:06.984885   39418 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0919 17:35:06.985019   39418 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/stopped-upgrade-359189/config.json ...
	I0919 17:35:06.985111   39418 cache.go:107] acquiring lock: {Name:mkd0c6fcd8284c10b54830addda581afe813cce6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.985144   39418 cache.go:107] acquiring lock: {Name:mk59a9183584d382ef0e7bc5d84487802531e330 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.985147   39418 cache.go:107] acquiring lock: {Name:mkcd130f5fb05350edf901846afe919cbfa1969d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.985202   39418 cache.go:115] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0919 17:35:06.985107   39418 cache.go:107] acquiring lock: {Name:mk0fc11856affac4f336bb2f1de8eb055ba2f68a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.985217   39418 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 109.537µs
	I0919 17:35:06.985223   39418 cache.go:107] acquiring lock: {Name:mk92bc9d8ceedb9f1be25caaca69be9cb69eae1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.985283   39418 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0919 17:35:06.985285   39418 cache.go:115] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0919 17:35:06.985303   39418 start.go:365] acquiring machines lock for stopped-upgrade-359189: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:35:06.985320   39418 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 164.68µs
	I0919 17:35:06.985336   39418 cache.go:115] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0919 17:35:06.985343   39418 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0919 17:35:06.985308   39418 cache.go:115] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0919 17:35:06.985349   39418 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 127.407µs
	I0919 17:35:06.985359   39418 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0919 17:35:06.985354   39418 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 218.979µs
	I0919 17:35:06.985325   39418 cache.go:107] acquiring lock: {Name:mkf6ca7fcc1c247c15d8012375de69780d47287f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.985370   39418 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0919 17:35:06.985363   39418 cache.go:107] acquiring lock: {Name:mk05159883ec3364195ac74c8ddaec9bd4805909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.985488   39418 cache.go:115] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0919 17:35:06.985506   39418 cache.go:115] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0919 17:35:06.985506   39418 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 414.318µs
	I0919 17:35:06.985526   39418 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0919 17:35:06.985519   39418 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 215.317µs
	I0919 17:35:06.985495   39418 cache.go:115] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0919 17:35:06.985533   39418 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0919 17:35:06.985499   39418 cache.go:107] acquiring lock: {Name:mk292b79d67e56345b66e4bc5097ee8d1502e283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:35:06.985541   39418 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 260.279µs
	I0919 17:35:06.985555   39418 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0919 17:35:06.985567   39418 cache.go:115] /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0919 17:35:06.985579   39418 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 121.693µs
	I0919 17:35:06.985587   39418 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0919 17:35:06.985605   39418 cache.go:87] Successfully saved all images to host disk.
	I0919 17:35:40.781251   39418 start.go:369] acquired machines lock for "stopped-upgrade-359189" in 33.795921266s
	I0919 17:35:40.781303   39418 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:35:40.781308   39418 fix.go:54] fixHost starting: minikube
	I0919 17:35:40.781740   39418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:35:40.781805   39418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:35:40.798487   39418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
	I0919 17:35:40.798882   39418 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:35:40.799506   39418 main.go:141] libmachine: Using API Version  1
	I0919 17:35:40.799531   39418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:35:40.799880   39418 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:35:40.800074   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	I0919 17:35:40.800258   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetState
	I0919 17:35:40.801801   39418 fix.go:102] recreateIfNeeded on stopped-upgrade-359189: state=Stopped err=<nil>
	I0919 17:35:40.801835   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	W0919 17:35:40.802017   39418 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:35:40.804139   39418 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-359189" ...
	I0919 17:35:40.805691   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .Start
	I0919 17:35:40.805876   39418 main.go:141] libmachine: (stopped-upgrade-359189) Ensuring networks are active...
	I0919 17:35:40.806634   39418 main.go:141] libmachine: (stopped-upgrade-359189) Ensuring network default is active
	I0919 17:35:40.807050   39418 main.go:141] libmachine: (stopped-upgrade-359189) Ensuring network minikube-net is active
	I0919 17:35:40.807435   39418 main.go:141] libmachine: (stopped-upgrade-359189) Getting domain xml...
	I0919 17:35:40.808142   39418 main.go:141] libmachine: (stopped-upgrade-359189) Creating domain...
	I0919 17:35:42.070221   39418 main.go:141] libmachine: (stopped-upgrade-359189) Waiting to get IP...
	I0919 17:35:42.071310   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:42.071679   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:42.071762   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:42.071677   39878 retry.go:31] will retry after 276.289979ms: waiting for machine to come up
	I0919 17:35:42.349340   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:42.350058   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:42.350091   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:42.350006   39878 retry.go:31] will retry after 379.707584ms: waiting for machine to come up
	I0919 17:35:42.731677   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:42.732285   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:42.732315   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:42.732238   39878 retry.go:31] will retry after 395.949747ms: waiting for machine to come up
	I0919 17:35:43.130014   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:43.130551   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:43.130584   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:43.130516   39878 retry.go:31] will retry after 391.112545ms: waiting for machine to come up
	I0919 17:35:43.523224   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:43.523927   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:43.523958   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:43.523884   39878 retry.go:31] will retry after 572.837665ms: waiting for machine to come up
	I0919 17:35:44.098883   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:44.099398   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:44.099424   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:44.099326   39878 retry.go:31] will retry after 655.80371ms: waiting for machine to come up
	I0919 17:35:44.756783   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:44.757270   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:44.757299   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:44.757228   39878 retry.go:31] will retry after 722.375612ms: waiting for machine to come up
	I0919 17:35:45.481159   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:45.481599   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:45.481629   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:45.481563   39878 retry.go:31] will retry after 1.281837312s: waiting for machine to come up
	I0919 17:35:46.764740   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:46.765350   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:46.765380   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:46.765293   39878 retry.go:31] will retry after 1.5270038s: waiting for machine to come up
	I0919 17:35:48.293437   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:48.293900   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:48.293933   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:48.293864   39878 retry.go:31] will retry after 1.76294852s: waiting for machine to come up
	I0919 17:35:50.058159   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:50.058620   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:50.058656   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:50.058546   39878 retry.go:31] will retry after 1.962073946s: waiting for machine to come up
	I0919 17:35:52.022746   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:52.023212   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:52.023240   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:52.023160   39878 retry.go:31] will retry after 2.911146533s: waiting for machine to come up
	I0919 17:35:54.938333   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:54.938833   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:54.938864   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:54.938783   39878 retry.go:31] will retry after 3.567645052s: waiting for machine to come up
	I0919 17:35:58.509635   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:35:58.510178   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:35:58.510208   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:35:58.510133   39878 retry.go:31] will retry after 5.110440963s: waiting for machine to come up
	I0919 17:36:03.622679   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:03.623137   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:36:03.623165   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:36:03.623099   39878 retry.go:31] will retry after 5.961951041s: waiting for machine to come up
	I0919 17:36:09.586699   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:09.587210   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | unable to find current IP address of domain stopped-upgrade-359189 in network minikube-net
	I0919 17:36:09.587243   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | I0919 17:36:09.587160   39878 retry.go:31] will retry after 7.501772125s: waiting for machine to come up
	I0919 17:36:17.091100   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.091485   39418 main.go:141] libmachine: (stopped-upgrade-359189) Found IP for machine: 192.168.50.38
	I0919 17:36:17.091512   39418 main.go:141] libmachine: (stopped-upgrade-359189) Reserving static IP address...
	I0919 17:36:17.091528   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has current primary IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.091978   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "stopped-upgrade-359189", mac: "52:54:00:ca:90:ba", ip: "192.168.50.38"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:17.092023   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-359189", mac: "52:54:00:ca:90:ba", ip: "192.168.50.38"}
	I0919 17:36:17.092037   39418 main.go:141] libmachine: (stopped-upgrade-359189) Reserved static IP address: 192.168.50.38
	I0919 17:36:17.092051   39418 main.go:141] libmachine: (stopped-upgrade-359189) Waiting for SSH to be available...
	I0919 17:36:17.092072   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | Getting to WaitForSSH function...
	I0919 17:36:17.094529   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.094872   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:17.094920   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.095020   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | Using SSH client type: external
	I0919 17:36:17.095056   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/stopped-upgrade-359189/id_rsa (-rw-------)
	I0919 17:36:17.095121   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/stopped-upgrade-359189/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:36:17.095181   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | About to run SSH command:
	I0919 17:36:17.095201   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | exit 0
	I0919 17:36:17.219751   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | SSH cmd err, output: <nil>: 
	I0919 17:36:17.220114   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetConfigRaw
	I0919 17:36:17.220816   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetIP
	I0919 17:36:17.223360   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.223719   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:17.223750   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.224017   39418 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/stopped-upgrade-359189/config.json ...
	I0919 17:36:17.224232   39418 machine.go:88] provisioning docker machine ...
	I0919 17:36:17.224256   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	I0919 17:36:17.224493   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetMachineName
	I0919 17:36:17.224648   39418 buildroot.go:166] provisioning hostname "stopped-upgrade-359189"
	I0919 17:36:17.224671   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetMachineName
	I0919 17:36:17.224900   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHHostname
	I0919 17:36:17.227386   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.227874   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:17.227909   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.228037   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHPort
	I0919 17:36:17.228203   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:17.228342   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:17.228499   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHUsername
	I0919 17:36:17.228644   39418 main.go:141] libmachine: Using SSH client type: native
	I0919 17:36:17.228978   39418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0919 17:36:17.229018   39418 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-359189 && echo "stopped-upgrade-359189" | sudo tee /etc/hostname
	I0919 17:36:17.344167   39418 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-359189
	
	I0919 17:36:17.344194   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHHostname
	I0919 17:36:17.347080   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.347525   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:17.347570   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.347674   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHPort
	I0919 17:36:17.347913   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:17.348081   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:17.348199   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHUsername
	I0919 17:36:17.348323   39418 main.go:141] libmachine: Using SSH client type: native
	I0919 17:36:17.348676   39418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0919 17:36:17.348696   39418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-359189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-359189/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-359189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:36:17.461576   39418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:36:17.461611   39418 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:36:17.461649   39418 buildroot.go:174] setting up certificates
	I0919 17:36:17.461661   39418 provision.go:83] configureAuth start
	I0919 17:36:17.461680   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetMachineName
	I0919 17:36:17.461948   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetIP
	I0919 17:36:17.464442   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.464769   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:17.464801   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.464946   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHHostname
	I0919 17:36:17.467427   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.467767   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:17.467800   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.467935   39418 provision.go:138] copyHostCerts
	I0919 17:36:17.467989   39418 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:36:17.468001   39418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:36:17.468065   39418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:36:17.468161   39418 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:36:17.468172   39418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:36:17.468205   39418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:36:17.468250   39418 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:36:17.468257   39418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:36:17.468287   39418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:36:17.468330   39418 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-359189 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube stopped-upgrade-359189]
	I0919 17:36:17.629119   39418 provision.go:172] copyRemoteCerts
	I0919 17:36:17.629174   39418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:36:17.629196   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHHostname
	I0919 17:36:17.631641   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.632030   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:17.632067   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.632234   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHPort
	I0919 17:36:17.632447   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:17.632596   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHUsername
	I0919 17:36:17.632741   39418 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/stopped-upgrade-359189/id_rsa Username:docker}
	I0919 17:36:17.719238   39418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:36:17.733721   39418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:36:17.746719   39418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 17:36:17.759370   39418 provision.go:86] duration metric: configureAuth took 297.691502ms
	I0919 17:36:17.759397   39418 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:36:17.759553   39418 config.go:182] Loaded profile config "stopped-upgrade-359189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0919 17:36:17.759616   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHHostname
	I0919 17:36:17.762421   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.762829   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:17.762866   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:17.763050   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHPort
	I0919 17:36:17.763238   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:17.763398   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:17.763535   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHUsername
	I0919 17:36:17.763670   39418 main.go:141] libmachine: Using SSH client type: native
	I0919 17:36:17.763966   39418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0919 17:36:17.763982   39418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:36:20.805925   39418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:36:20.805952   39418 machine.go:91] provisioned docker machine in 3.581705785s
	I0919 17:36:20.805973   39418 start.go:300] post-start starting for "stopped-upgrade-359189" (driver="kvm2")
	I0919 17:36:20.805983   39418 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:36:20.806002   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	I0919 17:36:20.806336   39418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:36:20.806364   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHHostname
	I0919 17:36:20.809159   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:20.809600   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:20.809634   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:20.809761   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHPort
	I0919 17:36:20.809963   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:20.810122   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHUsername
	I0919 17:36:20.810293   39418 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/stopped-upgrade-359189/id_rsa Username:docker}
	I0919 17:36:20.895446   39418 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:36:20.899603   39418 info.go:137] Remote host: Buildroot 2019.02.7
	I0919 17:36:20.899623   39418 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:36:20.899677   39418 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:36:20.899741   39418 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:36:20.899819   39418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:36:20.905186   39418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:36:20.920123   39418 start.go:303] post-start completed in 114.134789ms
	I0919 17:36:20.920142   39418 fix.go:56] fixHost completed within 40.138833603s
	I0919 17:36:20.920172   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHHostname
	I0919 17:36:20.922745   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:20.923148   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:20.923215   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:20.923335   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHPort
	I0919 17:36:20.923531   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:20.923671   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:20.923831   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHUsername
	I0919 17:36:20.923965   39418 main.go:141] libmachine: Using SSH client type: native
	I0919 17:36:20.924359   39418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0919 17:36:20.924372   39418 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 17:36:21.032753   39418 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695144980.964437685
	
	I0919 17:36:21.032778   39418 fix.go:206] guest clock: 1695144980.964437685
	I0919 17:36:21.032788   39418 fix.go:219] Guest: 2023-09-19 17:36:20.964437685 +0000 UTC Remote: 2023-09-19 17:36:20.920145877 +0000 UTC m=+74.183801209 (delta=44.291808ms)
	I0919 17:36:21.032808   39418 fix.go:190] guest clock delta is within tolerance: 44.291808ms
	I0919 17:36:21.032814   39418 start.go:83] releasing machines lock for "stopped-upgrade-359189", held for 40.251533995s
	I0919 17:36:21.032839   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	I0919 17:36:21.033135   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetIP
	I0919 17:36:21.035706   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:21.036109   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:21.036143   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:21.036466   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	I0919 17:36:21.037001   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	I0919 17:36:21.037193   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .DriverName
	I0919 17:36:21.037277   39418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:36:21.037345   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHHostname
	I0919 17:36:21.037414   39418 ssh_runner.go:195] Run: cat /version.json
	I0919 17:36:21.037439   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHHostname
	I0919 17:36:21.040259   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:21.040286   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:21.040670   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:21.040707   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:21.040736   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:90:ba", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-19 18:36:06 +0000 UTC Type:0 Mac:52:54:00:ca:90:ba Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:stopped-upgrade-359189 Clientid:01:52:54:00:ca:90:ba}
	I0919 17:36:21.040756   39418 main.go:141] libmachine: (stopped-upgrade-359189) DBG | domain stopped-upgrade-359189 has defined IP address 192.168.50.38 and MAC address 52:54:00:ca:90:ba in network minikube-net
	I0919 17:36:21.040849   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHPort
	I0919 17:36:21.040976   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHPort
	I0919 17:36:21.041090   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:21.041155   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHKeyPath
	I0919 17:36:21.041333   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHUsername
	I0919 17:36:21.041384   39418 main.go:141] libmachine: (stopped-upgrade-359189) Calling .GetSSHUsername
	I0919 17:36:21.041580   39418 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/stopped-upgrade-359189/id_rsa Username:docker}
	I0919 17:36:21.041650   39418 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/stopped-upgrade-359189/id_rsa Username:docker}
	W0919 17:36:21.153140   39418 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0919 17:36:21.153214   39418 ssh_runner.go:195] Run: systemctl --version
	I0919 17:36:21.158081   39418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:36:21.324584   39418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:36:21.329976   39418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:36:21.330055   39418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:36:21.335100   39418 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 17:36:21.335123   39418 start.go:469] detecting cgroup driver to use...
	I0919 17:36:21.335188   39418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:36:21.345242   39418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:36:21.354586   39418 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:36:21.354632   39418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:36:21.362585   39418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:36:21.370658   39418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0919 17:36:21.379024   39418 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0919 17:36:21.379083   39418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:36:21.462330   39418 docker.go:212] disabling docker service ...
	I0919 17:36:21.462403   39418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:36:21.473452   39418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:36:21.480983   39418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:36:21.565102   39418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:36:21.650681   39418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:36:21.659520   39418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:36:21.670841   39418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0919 17:36:21.670916   39418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:36:21.680096   39418 out.go:177] 
	W0919 17:36:21.681841   39418 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0919 17:36:21.681864   39418 out.go:239] * 
	W0919 17:36:21.682790   39418 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 17:36:21.684335   39418 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-359189 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (321.73s)
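Note: the failure above reduces to the sed step that rewrites pause_image: on the v1.6.2 ISO the drop-in /etc/crio/crio.conf.d/02-crio.conf does not exist, so sed exits 1 and start aborts with RUNTIME_ENABLE. A minimal shell sketch of that step with a guard for the missing drop-in (paths and the pause image tag are taken from the log above; the guard itself is illustrative and not part of minikube):

    # recreate the failing step, but create the drop-in first if it is absent
    sudo mkdir -p /etc/crio/crio.conf.d
    if [ ! -f /etc/crio/crio.conf.d/02-crio.conf ]; then
        echo 'pause_image = "registry.k8s.io/pause:3.1"' | sudo tee /etc/crio/crio.conf.d/02-crio.conf
    fi
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf

With the file present, the sed invocation from the log succeeds instead of exiting with status 1.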

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (101.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-169801 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-169801 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m37.573124616s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-169801] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-169801 in cluster pause-169801
	* Updating the running kvm2 "pause-169801" VM ...
	* Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-169801" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:34:49.048206   39107 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:34:49.048619   39107 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:34:49.048660   39107 out.go:309] Setting ErrFile to fd 2...
	I0919 17:34:49.048679   39107 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:34:49.049013   39107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:34:49.049790   39107 out.go:303] Setting JSON to false
	I0919 17:34:49.051091   39107 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4639,"bootTime":1695140250,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:34:49.051199   39107 start.go:138] virtualization: kvm guest
	I0919 17:34:49.053556   39107 out.go:177] * [pause-169801] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:34:49.055263   39107 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:34:49.055305   39107 notify.go:220] Checking for updates...
	I0919 17:34:49.057058   39107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:34:49.058559   39107 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:34:49.059882   39107 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:34:49.061214   39107 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:34:49.062601   39107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:34:49.064609   39107 config.go:182] Loaded profile config "pause-169801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:34:49.065200   39107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:34:49.065270   39107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:34:49.082795   39107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40645
	I0919 17:34:49.083249   39107 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:34:49.083788   39107 main.go:141] libmachine: Using API Version  1
	I0919 17:34:49.083812   39107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:34:49.084128   39107 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:34:49.084333   39107 main.go:141] libmachine: (pause-169801) Calling .DriverName
	I0919 17:34:49.084613   39107 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:34:49.085021   39107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:34:49.085069   39107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:34:49.100672   39107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
	I0919 17:34:49.101214   39107 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:34:49.101782   39107 main.go:141] libmachine: Using API Version  1
	I0919 17:34:49.101807   39107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:34:49.102184   39107 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:34:49.102391   39107 main.go:141] libmachine: (pause-169801) Calling .DriverName
	I0919 17:34:49.149190   39107 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:34:49.150501   39107 start.go:298] selected driver: kvm2
	I0919 17:34:49.150517   39107 start.go:902] validating driver "kvm2" against &{Name:pause-169801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.2 ClusterName:pause-169801 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:34:49.150727   39107 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:34:49.151132   39107 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:34:49.151208   39107 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:34:49.166103   39107 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:34:49.166789   39107 cni.go:84] Creating CNI manager for ""
	I0919 17:34:49.166804   39107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:34:49.166812   39107 start_flags.go:321] config:
	{Name:pause-169801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-169801 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:34:49.166987   39107 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:34:49.169021   39107 out.go:177] * Starting control plane node pause-169801 in cluster pause-169801
	I0919 17:34:49.170487   39107 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 17:34:49.170537   39107 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 17:34:49.170550   39107 cache.go:57] Caching tarball of preloaded images
	I0919 17:34:49.170646   39107 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:34:49.170660   39107 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 17:34:49.170835   39107 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/pause-169801/config.json ...
	I0919 17:34:49.171056   39107 start.go:365] acquiring machines lock for pause-169801: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:35:07.521623   39107 start.go:369] acquired machines lock for "pause-169801" in 18.350525052s
	I0919 17:35:07.521690   39107 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:35:07.521703   39107 fix.go:54] fixHost starting: 
	I0919 17:35:07.522091   39107 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:35:07.522149   39107 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:35:07.538920   39107 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0919 17:35:07.539314   39107 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:35:07.539770   39107 main.go:141] libmachine: Using API Version  1
	I0919 17:35:07.539793   39107 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:35:07.540132   39107 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:35:07.540333   39107 main.go:141] libmachine: (pause-169801) Calling .DriverName
	I0919 17:35:07.540496   39107 main.go:141] libmachine: (pause-169801) Calling .GetState
	I0919 17:35:07.542083   39107 fix.go:102] recreateIfNeeded on pause-169801: state=Running err=<nil>
	W0919 17:35:07.542099   39107 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:35:07.544111   39107 out.go:177] * Updating the running kvm2 "pause-169801" VM ...
	I0919 17:35:07.545556   39107 machine.go:88] provisioning docker machine ...
	I0919 17:35:07.545579   39107 main.go:141] libmachine: (pause-169801) Calling .DriverName
	I0919 17:35:07.545783   39107 main.go:141] libmachine: (pause-169801) Calling .GetMachineName
	I0919 17:35:07.545939   39107 buildroot.go:166] provisioning hostname "pause-169801"
	I0919 17:35:07.545968   39107 main.go:141] libmachine: (pause-169801) Calling .GetMachineName
	I0919 17:35:07.546142   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHHostname
	I0919 17:35:07.548587   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.549083   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:07.549119   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.549291   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHPort
	I0919 17:35:07.549476   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:07.549599   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:07.549741   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHUsername
	I0919 17:35:07.549883   39107 main.go:141] libmachine: Using SSH client type: native
	I0919 17:35:07.550379   39107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0919 17:35:07.550401   39107 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-169801 && echo "pause-169801" | sudo tee /etc/hostname
	I0919 17:35:07.684583   39107 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-169801
	
	I0919 17:35:07.684608   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHHostname
	I0919 17:35:07.687551   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.687909   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:07.687942   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.688164   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHPort
	I0919 17:35:07.688353   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:07.688526   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:07.688692   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHUsername
	I0919 17:35:07.688859   39107 main.go:141] libmachine: Using SSH client type: native
	I0919 17:35:07.689309   39107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0919 17:35:07.689328   39107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-169801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-169801/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-169801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:35:07.805760   39107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:35:07.805789   39107 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:35:07.805827   39107 buildroot.go:174] setting up certificates
	I0919 17:35:07.805850   39107 provision.go:83] configureAuth start
	I0919 17:35:07.805871   39107 main.go:141] libmachine: (pause-169801) Calling .GetMachineName
	I0919 17:35:07.806148   39107 main.go:141] libmachine: (pause-169801) Calling .GetIP
	I0919 17:35:07.809179   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.809586   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:07.809618   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.809782   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHHostname
	I0919 17:35:07.812232   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.812598   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:07.812630   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.812753   39107 provision.go:138] copyHostCerts
	I0919 17:35:07.812805   39107 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:35:07.812818   39107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:35:07.812882   39107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:35:07.813054   39107 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:35:07.813066   39107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:35:07.813098   39107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:35:07.813192   39107 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:35:07.813206   39107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:35:07.813230   39107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:35:07.813307   39107 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.pause-169801 san=[192.168.39.19 192.168.39.19 localhost 127.0.0.1 minikube pause-169801]
	I0919 17:35:07.888320   39107 provision.go:172] copyRemoteCerts
	I0919 17:35:07.888398   39107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:35:07.888465   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHHostname
	I0919 17:35:07.891201   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.891583   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:07.891619   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:07.891892   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHPort
	I0919 17:35:07.892104   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:07.895204   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHUsername
	I0919 17:35:07.895407   39107 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/pause-169801/id_rsa Username:docker}
	I0919 17:35:07.985461   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:35:08.014113   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:35:08.040608   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0919 17:35:08.066215   39107 provision.go:86] duration metric: configureAuth took 260.348569ms
	I0919 17:35:08.066244   39107 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:35:08.066466   39107 config.go:182] Loaded profile config "pause-169801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:35:08.066546   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHHostname
	I0919 17:35:08.069475   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:08.069869   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:08.069903   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:08.070090   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHPort
	I0919 17:35:08.070286   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:08.070466   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:08.070631   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHUsername
	I0919 17:35:08.070790   39107 main.go:141] libmachine: Using SSH client type: native
	I0919 17:35:08.071146   39107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0919 17:35:08.071165   39107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:35:13.640427   39107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:35:13.640452   39107 machine.go:91] provisioned docker machine in 6.094881725s
	I0919 17:35:13.640479   39107 start.go:300] post-start starting for "pause-169801" (driver="kvm2")
	I0919 17:35:13.640492   39107 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:35:13.640524   39107 main.go:141] libmachine: (pause-169801) Calling .DriverName
	I0919 17:35:13.640886   39107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:35:13.640922   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHHostname
	I0919 17:35:13.643881   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:13.644401   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:13.644446   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:13.644625   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHPort
	I0919 17:35:13.644820   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:13.645012   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHUsername
	I0919 17:35:13.645143   39107 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/pause-169801/id_rsa Username:docker}
	I0919 17:35:14.025897   39107 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:35:14.040611   39107 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:35:14.040640   39107 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:35:14.040726   39107 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:35:14.040826   39107 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:35:14.040940   39107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:35:14.071482   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:35:14.115646   39107 start.go:303] post-start completed in 475.152163ms
	I0919 17:35:14.115668   39107 fix.go:56] fixHost completed within 6.593965935s
	I0919 17:35:14.115693   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHHostname
	I0919 17:35:14.118555   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:14.118935   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:14.118982   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:14.119080   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHPort
	I0919 17:35:14.119274   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:14.119465   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:14.119611   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHUsername
	I0919 17:35:14.119766   39107 main.go:141] libmachine: Using SSH client type: native
	I0919 17:35:14.120114   39107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I0919 17:35:14.120126   39107 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 17:35:14.276466   39107 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695144914.273514343
	
	I0919 17:35:14.276488   39107 fix.go:206] guest clock: 1695144914.273514343
	I0919 17:35:14.276503   39107 fix.go:219] Guest: 2023-09-19 17:35:14.273514343 +0000 UTC Remote: 2023-09-19 17:35:14.115672826 +0000 UTC m=+25.109447639 (delta=157.841517ms)
	I0919 17:35:14.276519   39107 fix.go:190] guest clock delta is within tolerance: 157.841517ms
	I0919 17:35:14.276524   39107 start.go:83] releasing machines lock for "pause-169801", held for 6.7548676s
	I0919 17:35:14.276548   39107 main.go:141] libmachine: (pause-169801) Calling .DriverName
	I0919 17:35:14.276821   39107 main.go:141] libmachine: (pause-169801) Calling .GetIP
	I0919 17:35:14.280009   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:14.280424   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:14.280454   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:14.280657   39107 main.go:141] libmachine: (pause-169801) Calling .DriverName
	I0919 17:35:14.281226   39107 main.go:141] libmachine: (pause-169801) Calling .DriverName
	I0919 17:35:14.281427   39107 main.go:141] libmachine: (pause-169801) Calling .DriverName
	I0919 17:35:14.281561   39107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:35:14.281604   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHHostname
	I0919 17:35:14.281650   39107 ssh_runner.go:195] Run: cat /version.json
	I0919 17:35:14.281683   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHHostname
	I0919 17:35:14.284446   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:14.284761   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:14.284845   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:14.284878   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:14.285027   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHPort
	I0919 17:35:14.285214   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:14.285277   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:14.285307   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:14.285397   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHUsername
	I0919 17:35:14.285543   39107 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/pause-169801/id_rsa Username:docker}
	I0919 17:35:14.285577   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHPort
	I0919 17:35:14.285726   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHKeyPath
	I0919 17:35:14.285893   39107 main.go:141] libmachine: (pause-169801) Calling .GetSSHUsername
	I0919 17:35:14.286028   39107 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/pause-169801/id_rsa Username:docker}
	I0919 17:35:14.396873   39107 ssh_runner.go:195] Run: systemctl --version
	I0919 17:35:14.431388   39107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:35:14.657942   39107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:35:14.671281   39107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:35:14.671356   39107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:35:14.690589   39107 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 17:35:14.690620   39107 start.go:469] detecting cgroup driver to use...
	I0919 17:35:14.690702   39107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:35:14.754700   39107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:35:14.795226   39107 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:35:14.795285   39107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:35:14.819317   39107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:35:14.841473   39107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:35:15.132697   39107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:35:15.420310   39107 docker.go:212] disabling docker service ...
	I0919 17:35:15.420377   39107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:35:15.441764   39107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:35:15.461374   39107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:35:15.805573   39107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:35:16.060183   39107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:35:16.087725   39107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:35:16.118443   39107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0919 17:35:16.118506   39107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:35:16.136786   39107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:35:16.136871   39107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:35:16.151681   39107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:35:16.168725   39107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:35:16.182687   39107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:35:16.197847   39107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:35:16.211374   39107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:35:16.225356   39107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:35:16.431572   39107 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:35:18.114703   39107 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.683092641s)
	I0919 17:35:18.114733   39107 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:35:18.114798   39107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:35:18.120290   39107 start.go:537] Will wait 60s for crictl version
	I0919 17:35:18.120348   39107 ssh_runner.go:195] Run: which crictl
	I0919 17:35:18.124340   39107 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:35:18.169019   39107 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:35:18.169125   39107 ssh_runner.go:195] Run: crio --version
	I0919 17:35:18.222949   39107 ssh_runner.go:195] Run: crio --version
	I0919 17:35:18.272027   39107 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I0919 17:35:18.273522   39107 main.go:141] libmachine: (pause-169801) Calling .GetIP
	I0919 17:35:18.276178   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:18.276595   39107 main.go:141] libmachine: (pause-169801) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a3:5c", ip: ""} in network mk-pause-169801: {Iface:virbr1 ExpiryTime:2023-09-19 18:33:20 +0000 UTC Type:0 Mac:52:54:00:db:a3:5c Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:pause-169801 Clientid:01:52:54:00:db:a3:5c}
	I0919 17:35:18.276627   39107 main.go:141] libmachine: (pause-169801) DBG | domain pause-169801 has defined IP address 192.168.39.19 and MAC address 52:54:00:db:a3:5c in network mk-pause-169801
	I0919 17:35:18.276815   39107 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 17:35:18.281317   39107 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 17:35:18.281386   39107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:35:18.330354   39107 crio.go:496] all images are preloaded for cri-o runtime.
	I0919 17:35:18.330374   39107 crio.go:415] Images already preloaded, skipping extraction
	I0919 17:35:18.330426   39107 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:35:18.370493   39107 crio.go:496] all images are preloaded for cri-o runtime.
	I0919 17:35:18.370513   39107 cache_images.go:84] Images are preloaded, skipping loading
	I0919 17:35:18.370567   39107 ssh_runner.go:195] Run: crio config
	I0919 17:35:18.434095   39107 cni.go:84] Creating CNI manager for ""
	I0919 17:35:18.434122   39107 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:35:18.434143   39107 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:35:18.434166   39107 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-169801 NodeName:pause-169801 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 17:35:18.434305   39107 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-169801"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:35:18.434367   39107 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-169801 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-169801 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:35:18.434419   39107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 17:35:18.444687   39107 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:35:18.444755   39107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:35:18.454212   39107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0919 17:35:18.470716   39107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:35:18.487515   39107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0919 17:35:18.505979   39107 ssh_runner.go:195] Run: grep 192.168.39.19	control-plane.minikube.internal$ /etc/hosts
	I0919 17:35:18.510140   39107 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/pause-169801 for IP: 192.168.39.19
	I0919 17:35:18.510172   39107 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:35:18.510334   39107 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:35:18.510385   39107 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:35:18.510469   39107 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/pause-169801/client.key
	I0919 17:35:18.510553   39107 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/pause-169801/apiserver.key.8a6f02ba
	I0919 17:35:18.510609   39107 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/pause-169801/proxy-client.key
	I0919 17:35:18.510746   39107 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:35:18.510787   39107 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:35:18.510801   39107 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:35:18.510839   39107 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:35:18.510878   39107 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:35:18.510912   39107 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:35:18.510966   39107 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:35:18.511712   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/pause-169801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:35:18.535343   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/pause-169801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 17:35:18.557853   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/pause-169801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:35:18.580691   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/pause-169801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 17:35:18.607525   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:35:18.634366   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:35:18.658874   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:35:18.683330   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:35:18.707228   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:35:18.823638   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:35:19.179917   39107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:35:19.243746   39107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:35:19.271919   39107 ssh_runner.go:195] Run: openssl version
	I0919 17:35:19.281596   39107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:35:19.301429   39107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:35:19.311528   39107 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:35:19.311596   39107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:35:19.323642   39107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
	I0919 17:35:19.341475   39107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:35:19.358114   39107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:35:19.363278   39107 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:35:19.363339   39107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:35:19.371308   39107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:35:19.387777   39107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:35:19.402746   39107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:35:19.407399   39107 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:35:19.407459   39107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:35:19.414189   39107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:35:19.427558   39107 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:35:19.432930   39107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:35:19.438430   39107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:35:19.445264   39107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:35:19.450553   39107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:35:19.455811   39107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 17:35:19.461347   39107 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 17:35:19.466951   39107 kubeadm.go:404] StartCluster: {Name:pause-169801 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.2 ClusterName:pause-169801 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:35:19.467084   39107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 17:35:19.467143   39107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:35:19.522853   39107 cri.go:89] found id: "9134727b4fd6d5738e8f70d745af726857f4bf4548254f6ecd53959a98d15c4e"
	I0919 17:35:19.522883   39107 cri.go:89] found id: "bd201a673fe1a3b0a5d032f7862b82126778d4e21547880e4b94c9f83f47a6d3"
	I0919 17:35:19.522890   39107 cri.go:89] found id: "d2fba165bbe945eecaac011fbba5cc26ce0f86e9dc0e134610aeef3a8e6ce99d"
	I0919 17:35:19.522897   39107 cri.go:89] found id: "f023a25e65b2cbdf272da1990a163a583f3cb86b00cec0edead3ecf49df39dbd"
	I0919 17:35:19.522902   39107 cri.go:89] found id: "0cc63aeafcd3dbde45a8c4d798f182fa9d92537072f784a0968dd262139be186"
	I0919 17:35:19.522908   39107 cri.go:89] found id: "894ffd669de7c8477e5c167a07838a62cd93530f4a5b1a48aeb1d4cac871730a"
	I0919 17:35:19.522914   39107 cri.go:89] found id: ""
	I0919 17:35:19.522961   39107 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
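For context on the certificate handling in the stderr above: the runner copies each CA into /usr/share/ca-certificates, derives the OpenSSL subject hash with "openssl x509 -hash -noout", symlinks the file into /etc/ssl/certs under that hash (e.g. 51391683.0, b5213941.0), and then runs "openssl x509 -checkend 86400" against each control-plane certificate to confirm none expires within the next 24 hours. A minimal shell sketch of the equivalent checks (certificate paths are taken from the log above and would be run on the node, not the host):

    # Subject-hash symlink, mirroring the "ln -fs ... /etc/ssl/certs/<hash>.0" steps above
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

    # Expiry check, as run against each control-plane cert; exit status 0 means
    # the certificate is still valid 86400 seconds (24h) from now
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "apiserver.crt valid for at least 24h"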
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-169801 -n pause-169801
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-169801 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-169801 logs -n 25: (1.409495741s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:32 UTC | 19 Sep 23 17:32 UTC |
	| start   | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:32 UTC | 19 Sep 23 17:33 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-259423             | offline-crio-259423       | jenkins | v1.31.2 | 19 Sep 23 17:32 UTC | 19 Sep 23 17:32 UTC |
	| start   | -p pause-169801 --memory=2048      | pause-169801              | jenkins | v1.31.2 | 19 Sep 23 17:32 UTC | 19 Sep 23 17:34 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0       |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:33 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-435929          | running-upgrade-435929    | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-435929          | running-upgrade-435929    | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:33 UTC |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC |                     |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20          |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:34 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:33 UTC |
	| start   | -p force-systemd-flag-212057       | force-systemd-flag-212057 | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:34 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:34 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:34 UTC |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:35 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-169801                    | pause-169801              | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:36 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-212057 ssh cat  | force-systemd-flag-212057 | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:34 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-212057       | force-systemd-flag-212057 | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:34 UTC |
	| start   | -p cert-expiration-142729          | cert-expiration-142729    | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:36 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-359189          | stopped-upgrade-359189    | jenkins | v1.31.2 | 19 Sep 23 17:35 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-372421 sudo        | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:35 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:35 UTC | 19 Sep 23 17:35 UTC |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:35 UTC |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-359189          | stopped-upgrade-359189    | jenkins | v1.31.2 | 19 Sep 23 17:36 UTC | 19 Sep 23 17:36 UTC |
	| start   | -p cert-options-512928             | cert-options-512928       | jenkins | v1.31.2 | 19 Sep 23 17:36 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
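The Audit table above lists the minikube invocations leading up to this failure. The run under test is the second start of the pause-169801 profile (the "start -p pause-169801 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio" row, 17:34-17:36 UTC), whose stderr appears earlier in this section. An approximate way to reproduce that step outside the CI harness, assuming the pause-169801 profile already exists from the earlier "--memory=2048 --install-addons=false --wait=all" start:

    out/minikube-linux-amd64 start -p pause-169801 \
      --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio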
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:36:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:36:23.694938   40255 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:36:23.695239   40255 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:36:23.695244   40255 out.go:309] Setting ErrFile to fd 2...
	I0919 17:36:23.695247   40255 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:36:23.695411   40255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:36:23.695986   40255 out.go:303] Setting JSON to false
	I0919 17:36:23.697003   40255 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4734,"bootTime":1695140250,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:36:23.697072   40255 start.go:138] virtualization: kvm guest
	I0919 17:36:23.699374   40255 out.go:177] * [cert-options-512928] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:36:23.700913   40255 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:36:23.702435   40255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:36:23.700953   40255 notify.go:220] Checking for updates...
	I0919 17:36:23.705182   40255 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:36:23.706684   40255 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:36:23.708185   40255 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:36:23.709576   40255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:36:23.712244   40255 config.go:182] Loaded profile config "NoKubernetes-372421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0919 17:36:23.712741   40255 config.go:182] Loaded profile config "cert-expiration-142729": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:23.712934   40255 config.go:182] Loaded profile config "pause-169801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:23.713132   40255 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:36:23.749731   40255 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 17:36:23.751070   40255 start.go:298] selected driver: kvm2
	I0919 17:36:23.751076   40255 start.go:902] validating driver "kvm2" against <nil>
	I0919 17:36:23.751086   40255 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:36:23.751758   40255 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:36:23.751831   40255 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:36:23.767180   40255 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:36:23.767231   40255 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 17:36:23.767489   40255 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 17:36:23.767517   40255 cni.go:84] Creating CNI manager for ""
	I0919 17:36:23.767534   40255 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:36:23.767545   40255 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 17:36:23.767555   40255 start_flags.go:321] config:
	{Name:cert-options-512928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:cert-options-512928 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168
.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:36:23.767736   40255 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:36:23.769624   40255 out.go:177] * Starting control plane node cert-options-512928 in cluster cert-options-512928
	I0919 17:36:23.353146   39107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:36:23.485031   39107 node_ready.go:35] waiting up to 6m0s for node "pause-169801" to be "Ready" ...
	I0919 17:36:23.485435   39107 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0919 17:36:23.488685   39107 node_ready.go:49] node "pause-169801" has status "Ready":"True"
	I0919 17:36:23.488709   39107 node_ready.go:38] duration metric: took 3.645592ms waiting for node "pause-169801" to be "Ready" ...
	I0919 17:36:23.488719   39107 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:36:23.495394   39107 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7sskx" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:23.715396   39107 pod_ready.go:92] pod "coredns-5dd5756b68-7sskx" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:23.715423   39107 pod_ready.go:81] duration metric: took 220.005535ms waiting for pod "coredns-5dd5756b68-7sskx" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:23.715438   39107 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:22.382425   39613 main.go:141] libmachine: (NoKubernetes-372421) Waiting to get IP...
	I0919 17:36:22.383277   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:22.383699   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:22.383742   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:22.383653   40071 retry.go:31] will retry after 276.032141ms: waiting for machine to come up
	I0919 17:36:23.135702   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:23.136172   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:23.136194   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:23.136115   40071 retry.go:31] will retry after 325.473638ms: waiting for machine to come up
	I0919 17:36:23.463663   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:23.464195   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:23.464219   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:23.464114   40071 retry.go:31] will retry after 358.03404ms: waiting for machine to come up
	I0919 17:36:23.823606   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:23.824118   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:23.824137   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:23.824073   40071 retry.go:31] will retry after 419.353757ms: waiting for machine to come up
	I0919 17:36:24.244560   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:24.245054   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:24.245070   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:24.245003   40071 retry.go:31] will retry after 491.008265ms: waiting for machine to come up
	I0919 17:36:24.737752   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:24.738182   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:24.738199   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:24.738134   40071 retry.go:31] will retry after 619.294145ms: waiting for machine to come up
	I0919 17:36:25.359045   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:25.359449   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:25.359466   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:25.359417   40071 retry.go:31] will retry after 883.1921ms: waiting for machine to come up
	I0919 17:36:26.243747   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:26.244245   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:26.244259   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:26.244212   40071 retry.go:31] will retry after 931.601448ms: waiting for machine to come up
	I0919 17:36:24.114227   39107 pod_ready.go:92] pod "etcd-pause-169801" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:24.114257   39107 pod_ready.go:81] duration metric: took 398.809975ms waiting for pod "etcd-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.114271   39107 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.515021   39107 pod_ready.go:92] pod "kube-apiserver-pause-169801" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:24.515045   39107 pod_ready.go:81] duration metric: took 400.765268ms waiting for pod "kube-apiserver-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.515055   39107 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.914226   39107 pod_ready.go:92] pod "kube-controller-manager-pause-169801" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:24.914249   39107 pod_ready.go:81] duration metric: took 399.18718ms waiting for pod "kube-controller-manager-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.914259   39107 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-758ss" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:25.314442   39107 pod_ready.go:92] pod "kube-proxy-758ss" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:25.314467   39107 pod_ready.go:81] duration metric: took 400.201542ms waiting for pod "kube-proxy-758ss" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:25.314496   39107 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:25.714134   39107 pod_ready.go:92] pod "kube-scheduler-pause-169801" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:25.714162   39107 pod_ready.go:81] duration metric: took 399.658189ms waiting for pod "kube-scheduler-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:25.714174   39107 pod_ready.go:38] duration metric: took 2.22544286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:36:25.714193   39107 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:36:25.714250   39107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:36:25.728697   39107 api_server.go:72] duration metric: took 2.378274384s to wait for apiserver process to appear ...
	I0919 17:36:25.728724   39107 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:36:25.728742   39107 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0919 17:36:25.734316   39107 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0919 17:36:25.735565   39107 api_server.go:141] control plane version: v1.28.2
	I0919 17:36:25.735587   39107 api_server.go:131] duration metric: took 6.8565ms to wait for apiserver health ...
	I0919 17:36:25.735595   39107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:36:25.916263   39107 system_pods.go:59] 6 kube-system pods found
	I0919 17:36:25.916289   39107 system_pods.go:61] "coredns-5dd5756b68-7sskx" [409754a7-e951-45ac-beea-ca183d856092] Running
	I0919 17:36:25.916294   39107 system_pods.go:61] "etcd-pause-169801" [12df498d-e471-456f-a15a-bcfc3ab5ecbd] Running
	I0919 17:36:25.916298   39107 system_pods.go:61] "kube-apiserver-pause-169801" [b62dda67-2d85-47db-8457-4ff45b24b618] Running
	I0919 17:36:25.916303   39107 system_pods.go:61] "kube-controller-manager-pause-169801" [1b2efcc4-05be-4adc-aad4-5a8081270dc7] Running
	I0919 17:36:25.916307   39107 system_pods.go:61] "kube-proxy-758ss" [f22589b0-519f-4fa0-ba97-e13745761263] Running
	I0919 17:36:25.916313   39107 system_pods.go:61] "kube-scheduler-pause-169801" [1533c6dc-55e9-4beb-b217-4b6ba5ab002c] Running
	I0919 17:36:25.916319   39107 system_pods.go:74] duration metric: took 180.719066ms to wait for pod list to return data ...
	I0919 17:36:25.916326   39107 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:36:26.114026   39107 default_sa.go:45] found service account: "default"
	I0919 17:36:26.114050   39107 default_sa.go:55] duration metric: took 197.719046ms for default service account to be created ...
	I0919 17:36:26.114058   39107 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:36:26.317689   39107 system_pods.go:86] 6 kube-system pods found
	I0919 17:36:26.317715   39107 system_pods.go:89] "coredns-5dd5756b68-7sskx" [409754a7-e951-45ac-beea-ca183d856092] Running
	I0919 17:36:26.317720   39107 system_pods.go:89] "etcd-pause-169801" [12df498d-e471-456f-a15a-bcfc3ab5ecbd] Running
	I0919 17:36:26.317724   39107 system_pods.go:89] "kube-apiserver-pause-169801" [b62dda67-2d85-47db-8457-4ff45b24b618] Running
	I0919 17:36:26.317728   39107 system_pods.go:89] "kube-controller-manager-pause-169801" [1b2efcc4-05be-4adc-aad4-5a8081270dc7] Running
	I0919 17:36:26.317731   39107 system_pods.go:89] "kube-proxy-758ss" [f22589b0-519f-4fa0-ba97-e13745761263] Running
	I0919 17:36:26.317735   39107 system_pods.go:89] "kube-scheduler-pause-169801" [1533c6dc-55e9-4beb-b217-4b6ba5ab002c] Running
	I0919 17:36:26.317742   39107 system_pods.go:126] duration metric: took 203.678253ms to wait for k8s-apps to be running ...
	I0919 17:36:26.317750   39107 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:36:26.317798   39107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:36:26.331927   39107 system_svc.go:56] duration metric: took 14.171003ms WaitForService to wait for kubelet.
	I0919 17:36:26.331948   39107 kubeadm.go:581] duration metric: took 2.981534027s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:36:26.331980   39107 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:36:26.514095   39107 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:36:26.514129   39107 node_conditions.go:123] node cpu capacity is 2
	I0919 17:36:26.514142   39107 node_conditions.go:105] duration metric: took 182.156299ms to run NodePressure ...
	I0919 17:36:26.514155   39107 start.go:228] waiting for startup goroutines ...
	I0919 17:36:26.514164   39107 start.go:233] waiting for cluster config update ...
	I0919 17:36:26.514177   39107 start.go:242] writing updated cluster config ...
	I0919 17:36:26.514525   39107 ssh_runner.go:195] Run: rm -f paused
	I0919 17:36:26.561721   39107 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:36:26.563842   39107 out.go:177] * Done! kubectl is now configured to use "pause-169801" cluster and "default" namespace by default
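The pause-169801 lines (process 39107) at the end of the log above show the post-restart verification pass: the node and each control-plane pod are polled for the Ready condition, the apiserver healthz endpoint at https://192.168.39.19:8443/healthz is expected to return 200, and the kubelet service is confirmed active before the updated cluster config is written. A rough way to repeat those checks against the same profile from outside the test harness (assuming the profile and its kubeconfig context still exist):

    kubectl --context pause-169801 get --raw /healthz        # expect "ok"
    kubectl --context pause-169801 get pods -n kube-system    # control-plane pods should be Running and Ready
    out/minikube-linux-amd64 -p pause-169801 ssh "sudo systemctl is-active kubelet"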
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:33:16 UTC, ends at Tue 2023-09-19 17:36:27 UTC. --
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.212807754Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144987212792420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=1582d90a-a8b6-4496-83b3-b12242c7db82 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.213372494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9bae6e3a-2f37-423a-9d24-bc72ff6797fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.213450894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9bae6e3a-2f37-423a-9d24-bc72ff6797fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.213874089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695144968880610937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695144968862953157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695144963327943744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:
map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695144963262836081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695144963335283586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.
container.hash: 79a5885c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695144963296905813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1695144938184050090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_EXITED,CreatedAt:1695144936184171098,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c
14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_EXITED,CreatedAt:1695144935185395454,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_EXITED,CreatedAt:1695144928430924047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.container.hash: 79a5885c,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1695144920533847152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1695144920458396180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9bae6e3a-2f37-423a-9d24-bc72ff6797fc name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.266162853Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fcb4a65e-87b7-44cb-acbd-9dd9c1e0866f name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.266249562Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fcb4a65e-87b7-44cb-acbd-9dd9c1e0866f name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.268309183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6b33ab38-b037-40b9-ab88-842767998792 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.268787684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144987268770536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=6b33ab38-b037-40b9-ab88-842767998792 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.269750350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0e2c0949-1ccd-46a5-9bf8-173bf8466bac name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.269833450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0e2c0949-1ccd-46a5-9bf8-173bf8466bac name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.270083644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695144968880610937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695144968862953157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695144963327943744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:
map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695144963262836081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695144963335283586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.
container.hash: 79a5885c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695144963296905813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1695144938184050090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_EXITED,CreatedAt:1695144936184171098,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c
14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_EXITED,CreatedAt:1695144935185395454,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_EXITED,CreatedAt:1695144928430924047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.container.hash: 79a5885c,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1695144920533847152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1695144920458396180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0e2c0949-1ccd-46a5-9bf8-173bf8466bac name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.314948651Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f2146c86-fc32-45ec-ac08-6a4d6a68f955 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.315049507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f2146c86-fc32-45ec-ac08-6a4d6a68f955 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.319719857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e090f047-2d03-4bc2-8dd6-4d0ce5733d51 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.320231023Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144987320210424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=e090f047-2d03-4bc2-8dd6-4d0ce5733d51 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.321403723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e2f589bf-b786-4588-a0c5-09a17b012b57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.321454470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e2f589bf-b786-4588-a0c5-09a17b012b57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.321904106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695144968880610937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695144968862953157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695144963327943744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:
map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695144963262836081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695144963335283586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.
container.hash: 79a5885c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695144963296905813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1695144938184050090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_EXITED,CreatedAt:1695144936184171098,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c
14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_EXITED,CreatedAt:1695144935185395454,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_EXITED,CreatedAt:1695144928430924047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.container.hash: 79a5885c,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1695144920533847152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1695144920458396180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e2f589bf-b786-4588-a0c5-09a17b012b57 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.364698840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=48b86e99-93e7-4db4-a64e-8f45649ebea3 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.364762786Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=48b86e99-93e7-4db4-a64e-8f45649ebea3 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.367247457Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f6c4e7fe-a468-46d9-915e-49aa4af26d10 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.367615194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144987367601949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=f6c4e7fe-a468-46d9-915e-49aa4af26d10 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.368336166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9a83573-c127-4127-90bc-aae7aacf7e05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.368408776Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9a83573-c127-4127-90bc-aae7aacf7e05 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:27 pause-169801 crio[2732]: time="2023-09-19 17:36:27.368765749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695144968880610937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695144968862953157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695144963327943744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:
map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695144963262836081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695144963335283586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.
container.hash: 79a5885c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695144963296905813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1695144938184050090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_EXITED,CreatedAt:1695144936184171098,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c
14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_EXITED,CreatedAt:1695144935185395454,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_EXITED,CreatedAt:1695144928430924047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.container.hash: 79a5885c,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1695144920533847152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1695144920458396180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9a83573-c127-4127-90bc-aae7aacf7e05 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	084d99db51fc5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago       Running             coredns                   2                   91928b780f762       coredns-5dd5756b68-7sskx
	d2cc21971a5fa       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   18 seconds ago       Running             kube-proxy                2                   b47445bbb89b9       kube-proxy-758ss
	14546ad7fd1d9       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   24 seconds ago       Running             kube-apiserver            3                   489699a813bfd       kube-apiserver-pause-169801
	597729fb7fc2d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago       Running             etcd                      3                   45befc0316222       etcd-pause-169801
	4e4d161aaa6a5       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   24 seconds ago       Running             kube-controller-manager   3                   57e1ce1cf7d6a       kube-controller-manager-pause-169801
	2d1a6e275bd3a       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   24 seconds ago       Running             kube-scheduler            3                   c43d2da3228e0       kube-scheduler-pause-169801
	3b54bf0f57296       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   49 seconds ago       Exited              etcd                      2                   45befc0316222       etcd-pause-169801
	b8526c6c5b0f7       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   51 seconds ago       Exited              kube-controller-manager   2                   57e1ce1cf7d6a       kube-controller-manager-pause-169801
	a893fa5bb1455       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   52 seconds ago       Exited              kube-scheduler            2                   c43d2da3228e0       kube-scheduler-pause-169801
	ee3fa8fbdd88d       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   59 seconds ago       Exited              kube-apiserver            2                   489699a813bfd       kube-apiserver-pause-169801
	12b8ee660f025       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   About a minute ago   Exited              kube-proxy                1                   b47445bbb89b9       kube-proxy-758ss
	66189fd3696e2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   1                   91928b780f762       coredns-5dd5756b68-7sskx
	
	* 
	* ==> coredns [084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54636 - 52957 "HINFO IN 3475859521479418701.1959087963278092099. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015429337s
	
	* 
	* ==> coredns [66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45625 - 14791 "HINFO IN 7341575377565025544.6655293081312810800. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016330555s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-169801
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-169801
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=pause-169801
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_33_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:33:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-169801
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 17:36:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:36:08 +0000   Tue, 19 Sep 2023 17:33:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:36:08 +0000   Tue, 19 Sep 2023 17:33:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:36:08 +0000   Tue, 19 Sep 2023 17:33:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:36:08 +0000   Tue, 19 Sep 2023 17:33:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    pause-169801
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 ab18f5faf87b4f8f9fe13be2a6396937
	  System UUID:                ab18f5fa-f87b-4f8f-9fe1-3be2a6396937
	  Boot ID:                    4d1e504d-f388-4880-b5b3-37b59688735a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-7sskx                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m21s
	  kube-system                 etcd-pause-169801                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m33s
	  kube-system                 kube-apiserver-pause-169801             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-controller-manager-pause-169801    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-758ss                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-pause-169801             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m18s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m42s (x8 over 2m43s)  kubelet          Node pause-169801 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s (x8 over 2m43s)  kubelet          Node pause-169801 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s (x7 over 2m43s)  kubelet          Node pause-169801 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m33s                  kubelet          Node pause-169801 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m33s                  kubelet          Node pause-169801 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m33s                  kubelet          Node pause-169801 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m33s                  kubelet          Node pause-169801 status is now: NodeReady
	  Normal  Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m22s                  node-controller  Node pause-169801 event: Registered Node pause-169801 in Controller
	  Normal  Starting                 25s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)      kubelet          Node pause-169801 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)      kubelet          Node pause-169801 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)      kubelet          Node pause-169801 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                     node-controller  Node pause-169801 event: Registered Node pause-169801 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072621] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.711818] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.423770] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152141] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.084995] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.798807] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.188704] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.195670] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.165538] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.310525] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +12.075960] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[  +9.338337] systemd-fstab-generator[1272]: Ignoring "noauto" for root device
	[Sep19 17:34] kauditd_printk_skb: 24 callbacks suppressed
	[Sep19 17:35] systemd-fstab-generator[2457]: Ignoring "noauto" for root device
	[  +0.313392] systemd-fstab-generator[2491]: Ignoring "noauto" for root device
	[  +0.307929] systemd-fstab-generator[2515]: Ignoring "noauto" for root device
	[  +0.361399] systemd-fstab-generator[2599]: Ignoring "noauto" for root device
	[  +0.388757] systemd-fstab-generator[2624]: Ignoring "noauto" for root device
	[Sep19 17:36] systemd-fstab-generator[3812]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247] <==
	* {"level":"info","ts":"2023-09-19T17:35:38.755368Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:35:40.140707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-19T17:35:40.140772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-19T17:35:40.140806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgPreVoteResp from 683e1d26ac7e3123 at term 2"}
	{"level":"info","ts":"2023-09-19T17:35:40.140819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became candidate at term 3"}
	{"level":"info","ts":"2023-09-19T17:35:40.140824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgVoteResp from 683e1d26ac7e3123 at term 3"}
	{"level":"info","ts":"2023-09-19T17:35:40.140833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became leader at term 3"}
	{"level":"info","ts":"2023-09-19T17:35:40.140843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 683e1d26ac7e3123 elected leader 683e1d26ac7e3123 at term 3"}
	{"level":"info","ts":"2023-09-19T17:35:40.147298Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"683e1d26ac7e3123","local-member-attributes":"{Name:pause-169801 ClientURLs:[https://192.168.39.19:2379]}","request-path":"/0/members/683e1d26ac7e3123/attributes","cluster-id":"3f32d84448c0bab8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:35:40.147315Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:35:40.147691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:35:40.148986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.19:2379"}
	{"level":"info","ts":"2023-09-19T17:35:40.148996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T17:35:40.149141Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:35:40.14918Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:35:44.966015Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-19T17:35:44.966186Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-169801","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"]}
	{"level":"warn","ts":"2023-09-19T17:35:44.966283Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-19T17:35:44.966381Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-19T17:35:45.008768Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-19T17:35:45.008856Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-19T17:35:45.008918Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"683e1d26ac7e3123","current-leader-member-id":"683e1d26ac7e3123"}
	{"level":"info","ts":"2023-09-19T17:35:45.019803Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:35:45.019912Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:35:45.019922Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-169801","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"]}
	
	* 
	* ==> etcd [597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531] <==
	* {"level":"info","ts":"2023-09-19T17:36:05.476618Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T17:36:05.476696Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T17:36:05.476882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 switched to configuration voters=(7511473280440480035)"}
	{"level":"info","ts":"2023-09-19T17:36:05.476979Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","added-peer-id":"683e1d26ac7e3123","added-peer-peer-urls":["https://192.168.39.19:2380"]}
	{"level":"info","ts":"2023-09-19T17:36:05.477076Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:36:05.477098Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:36:05.478434Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-19T17:36:05.482719Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"683e1d26ac7e3123","initial-advertise-peer-urls":["https://192.168.39.19:2380"],"listen-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-19T17:36:05.48278Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-19T17:36:05.482827Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:36:05.482834Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:36:06.42977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-19T17:36:06.429901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-19T17:36:06.42994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgPreVoteResp from 683e1d26ac7e3123 at term 3"}
	{"level":"info","ts":"2023-09-19T17:36:06.429971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became candidate at term 4"}
	{"level":"info","ts":"2023-09-19T17:36:06.429996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgVoteResp from 683e1d26ac7e3123 at term 4"}
	{"level":"info","ts":"2023-09-19T17:36:06.430023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became leader at term 4"}
	{"level":"info","ts":"2023-09-19T17:36:06.430048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 683e1d26ac7e3123 elected leader 683e1d26ac7e3123 at term 4"}
	{"level":"info","ts":"2023-09-19T17:36:06.435783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:36:06.43605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:36:06.43702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.19:2379"}
	{"level":"info","ts":"2023-09-19T17:36:06.435785Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"683e1d26ac7e3123","local-member-attributes":"{Name:pause-169801 ClientURLs:[https://192.168.39.19:2379]}","request-path":"/0/members/683e1d26ac7e3123/attributes","cluster-id":"3f32d84448c0bab8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:36:06.437447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:36:06.437487Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:36:06.437872Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  17:36:27 up 3 min,  0 users,  load average: 0.60, 0.40, 0.17
	Linux pause-169801 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa] <==
	* I0919 17:36:07.925803       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0919 17:36:07.925814       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0919 17:36:07.874207       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 17:36:08.002741       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 17:36:08.024797       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0919 17:36:08.024920       1 aggregator.go:166] initial CRD sync complete...
	I0919 17:36:08.024934       1 autoregister_controller.go:141] Starting autoregister controller
	I0919 17:36:08.024940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 17:36:08.024946       1 cache.go:39] Caches are synced for autoregister controller
	I0919 17:36:08.050547       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0919 17:36:08.072944       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0919 17:36:08.073080       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 17:36:08.073494       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0919 17:36:08.073575       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0919 17:36:08.074558       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 17:36:08.075526       1 shared_informer.go:318] Caches are synced for configmaps
	E0919 17:36:08.087852       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0919 17:36:08.901818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 17:36:09.636911       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0919 17:36:09.649707       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0919 17:36:09.687944       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0919 17:36:09.718220       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 17:36:09.726272       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 17:36:20.454824       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 17:36:20.659908       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 17:35:55.250773       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 17:35:55.255528       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 17:35:55.406242       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7] <==
	* I0919 17:36:20.435434       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0919 17:36:20.435797       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-169801"
	I0919 17:36:20.435886       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0919 17:36:20.436029       1 event.go:307] "Event occurred" object="pause-169801" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-169801 event: Registered Node pause-169801 in Controller"
	I0919 17:36:20.437274       1 shared_informer.go:318] Caches are synced for ephemeral
	I0919 17:36:20.449931       1 shared_informer.go:318] Caches are synced for deployment
	I0919 17:36:20.453535       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0919 17:36:20.456225       1 shared_informer.go:318] Caches are synced for namespace
	I0919 17:36:20.461349       1 shared_informer.go:318] Caches are synced for daemon sets
	I0919 17:36:20.474521       1 shared_informer.go:318] Caches are synced for attach detach
	I0919 17:36:20.479277       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0919 17:36:20.484069       1 shared_informer.go:318] Caches are synced for endpoint
	I0919 17:36:20.485592       1 shared_informer.go:318] Caches are synced for stateful set
	I0919 17:36:20.488145       1 shared_informer.go:318] Caches are synced for GC
	I0919 17:36:20.488946       1 shared_informer.go:318] Caches are synced for cronjob
	I0919 17:36:20.503019       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0919 17:36:20.503110       1 shared_informer.go:318] Caches are synced for persistent volume
	I0919 17:36:20.521498       1 shared_informer.go:318] Caches are synced for crt configmap
	I0919 17:36:20.559152       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0919 17:36:20.569793       1 shared_informer.go:318] Caches are synced for resource quota
	I0919 17:36:20.582382       1 shared_informer.go:318] Caches are synced for HPA
	I0919 17:36:20.611885       1 shared_informer.go:318] Caches are synced for resource quota
	I0919 17:36:20.966114       1 shared_informer.go:318] Caches are synced for garbage collector
	I0919 17:36:20.966171       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0919 17:36:21.012100       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29] <==
	* I0919 17:35:37.173768       1 serving.go:348] Generated self-signed cert in-memory
	I0919 17:35:37.429556       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I0919 17:35:37.429620       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:35:37.431425       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 17:35:37.431742       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 17:35:37.432113       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0919 17:35:37.432284       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de] <==
	* I0919 17:35:20.719756       1 server_others.go:69] "Using iptables proxy"
	E0919 17:35:20.722920       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-169801": dial tcp 192.168.39.19:8443: connect: connection refused
	E0919 17:35:21.779790       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-169801": dial tcp 192.168.39.19:8443: connect: connection refused
	E0919 17:35:23.891120       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-169801": dial tcp 192.168.39.19:8443: connect: connection refused
	E0919 17:35:28.688287       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-169801": dial tcp 192.168.39.19:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145] <==
	* I0919 17:36:09.178200       1 server_others.go:69] "Using iptables proxy"
	I0919 17:36:09.191083       1 node.go:141] Successfully retrieved node IP: 192.168.39.19
	I0919 17:36:09.264777       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 17:36:09.264854       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 17:36:09.272768       1 server_others.go:152] "Using iptables Proxier"
	I0919 17:36:09.272881       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 17:36:09.273120       1 server.go:846] "Version info" version="v1.28.2"
	I0919 17:36:09.273132       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:36:09.274244       1 config.go:188] "Starting service config controller"
	I0919 17:36:09.274311       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 17:36:09.274338       1 config.go:97] "Starting endpoint slice config controller"
	I0919 17:36:09.274342       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 17:36:09.276082       1 config.go:315] "Starting node config controller"
	I0919 17:36:09.276122       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 17:36:09.375012       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 17:36:09.375019       1 shared_informer.go:318] Caches are synced for service config
	I0919 17:36:09.376497       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d] <==
	* I0919 17:36:05.889524       1 serving.go:348] Generated self-signed cert in-memory
	W0919 17:36:07.938176       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 17:36:07.938231       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:36:07.938243       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 17:36:07.938249       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 17:36:07.995491       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0919 17:36:07.995539       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:36:07.997438       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 17:36:08.004933       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 17:36:08.005070       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 17:36:08.005195       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 17:36:08.106464       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d] <==
	* I0919 17:35:36.017811       1 serving.go:348] Generated self-signed cert in-memory
	W0919 17:35:45.561114       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 17:35:45.561199       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:35:45.561214       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 17:35:45.561224       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 17:35:45.589286       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0919 17:35:45.589376       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:35:45.591481       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	E0919 17:35:45.591597       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0919 17:35:45.591827       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:33:16 UTC, ends at Tue 2023-09-19 17:36:28 UTC. --
	Sep 19 17:36:03 pause-169801 kubelet[3818]: W0919 17:36:03.681075    3818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:03 pause-169801 kubelet[3818]: E0919 17:36:03.681139    3818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:03 pause-169801 kubelet[3818]: E0919 17:36:03.964415    3818 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-169801?timeout=10s\": dial tcp 192.168.39.19:8443: connect: connection refused" interval="1.6s"
	Sep 19 17:36:04 pause-169801 kubelet[3818]: I0919 17:36:04.070562    3818 kubelet_node_status.go:70] "Attempting to register node" node="pause-169801"
	Sep 19 17:36:04 pause-169801 kubelet[3818]: E0919 17:36:04.071043    3818 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.19:8443: connect: connection refused" node="pause-169801"
	Sep 19 17:36:04 pause-169801 kubelet[3818]: W0919 17:36:04.078166    3818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-169801&limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:04 pause-169801 kubelet[3818]: E0919 17:36:04.078224    3818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-169801&limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:04 pause-169801 kubelet[3818]: W0919 17:36:04.095067    3818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:04 pause-169801 kubelet[3818]: E0919 17:36:04.095117    3818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:04 pause-169801 kubelet[3818]: E0919 17:36:04.291784    3818 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-169801.17865d5cb636c29a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-169801", UID:"pause-169801", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"pause-169801"}, FirstTimestamp:time.Date(2023, time.September, 19, 17, 36, 2, 526986906, time.Local), LastTimestamp:time.Date(2023, time.September, 19, 17, 36, 2, 526986906, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-169801"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.19:8443: connect: connection refused'(may retry after sleeping)
	Sep 19 17:36:05 pause-169801 kubelet[3818]: I0919 17:36:05.672759    3818 kubelet_node_status.go:70] "Attempting to register node" node="pause-169801"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.089943    3818 kubelet_node_status.go:108] "Node was previously registered" node="pause-169801"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.090030    3818 kubelet_node_status.go:73] "Successfully registered node" node="pause-169801"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.092824    3818 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.093976    3818 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.528771    3818 apiserver.go:52] "Watching apiserver"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.532722    3818 topology_manager.go:215] "Topology Admit Handler" podUID="409754a7-e951-45ac-beea-ca183d856092" podNamespace="kube-system" podName="coredns-5dd5756b68-7sskx"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.532916    3818 topology_manager.go:215] "Topology Admit Handler" podUID="f22589b0-519f-4fa0-ba97-e13745761263" podNamespace="kube-system" podName="kube-proxy-758ss"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.553000    3818 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.583956    3818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f22589b0-519f-4fa0-ba97-e13745761263-xtables-lock\") pod \"kube-proxy-758ss\" (UID: \"f22589b0-519f-4fa0-ba97-e13745761263\") " pod="kube-system/kube-proxy-758ss"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.584041    3818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f22589b0-519f-4fa0-ba97-e13745761263-lib-modules\") pod \"kube-proxy-758ss\" (UID: \"f22589b0-519f-4fa0-ba97-e13745761263\") " pod="kube-system/kube-proxy-758ss"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.833978    3818 scope.go:117] "RemoveContainer" containerID="12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.834823    3818 scope.go:117] "RemoveContainer" containerID="66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538"
	Sep 19 17:36:10 pause-169801 kubelet[3818]: I0919 17:36:10.777170    3818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 17:36:12 pause-169801 kubelet[3818]: I0919 17:36:12.571368    3818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-169801 -n pause-169801
helpers_test.go:261: (dbg) Run:  kubectl --context pause-169801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-169801 -n pause-169801
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-169801 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-169801 logs -n 25: (1.411523083s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:32 UTC | 19 Sep 23 17:32 UTC |
	| start   | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:32 UTC | 19 Sep 23 17:33 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-259423             | offline-crio-259423       | jenkins | v1.31.2 | 19 Sep 23 17:32 UTC | 19 Sep 23 17:32 UTC |
	| start   | -p pause-169801 --memory=2048      | pause-169801              | jenkins | v1.31.2 | 19 Sep 23 17:32 UTC | 19 Sep 23 17:34 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0       |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:33 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-435929          | running-upgrade-435929    | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-435929          | running-upgrade-435929    | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:33 UTC |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC |                     |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20          |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:34 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-159716       | kubernetes-upgrade-159716 | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:33 UTC |
	| start   | -p force-systemd-flag-212057       | force-systemd-flag-212057 | jenkins | v1.31.2 | 19 Sep 23 17:33 UTC | 19 Sep 23 17:34 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:34 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:34 UTC |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:35 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-169801                    | pause-169801              | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:36 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-212057 ssh cat  | force-systemd-flag-212057 | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:34 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-212057       | force-systemd-flag-212057 | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:34 UTC |
	| start   | -p cert-expiration-142729          | cert-expiration-142729    | jenkins | v1.31.2 | 19 Sep 23 17:34 UTC | 19 Sep 23 17:36 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-359189          | stopped-upgrade-359189    | jenkins | v1.31.2 | 19 Sep 23 17:35 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-372421 sudo        | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:35 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:35 UTC | 19 Sep 23 17:35 UTC |
	| start   | -p NoKubernetes-372421             | NoKubernetes-372421       | jenkins | v1.31.2 | 19 Sep 23 17:35 UTC |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-359189          | stopped-upgrade-359189    | jenkins | v1.31.2 | 19 Sep 23 17:36 UTC | 19 Sep 23 17:36 UTC |
	| start   | -p cert-options-512928             | cert-options-512928       | jenkins | v1.31.2 | 19 Sep 23 17:36 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:36:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:36:23.694938   40255 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:36:23.695239   40255 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:36:23.695244   40255 out.go:309] Setting ErrFile to fd 2...
	I0919 17:36:23.695247   40255 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:36:23.695411   40255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:36:23.695986   40255 out.go:303] Setting JSON to false
	I0919 17:36:23.697003   40255 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4734,"bootTime":1695140250,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:36:23.697072   40255 start.go:138] virtualization: kvm guest
	I0919 17:36:23.699374   40255 out.go:177] * [cert-options-512928] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:36:23.700913   40255 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:36:23.702435   40255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:36:23.700953   40255 notify.go:220] Checking for updates...
	I0919 17:36:23.705182   40255 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:36:23.706684   40255 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:36:23.708185   40255 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:36:23.709576   40255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:36:23.712244   40255 config.go:182] Loaded profile config "NoKubernetes-372421": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0919 17:36:23.712741   40255 config.go:182] Loaded profile config "cert-expiration-142729": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:23.712934   40255 config.go:182] Loaded profile config "pause-169801": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:23.713132   40255 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:36:23.749731   40255 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 17:36:23.751070   40255 start.go:298] selected driver: kvm2
	I0919 17:36:23.751076   40255 start.go:902] validating driver "kvm2" against <nil>
	I0919 17:36:23.751086   40255 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:36:23.751758   40255 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:36:23.751831   40255 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:36:23.767180   40255 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:36:23.767231   40255 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 17:36:23.767489   40255 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 17:36:23.767517   40255 cni.go:84] Creating CNI manager for ""
	I0919 17:36:23.767534   40255 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:36:23.767545   40255 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 17:36:23.767555   40255 start_flags.go:321] config:
	{Name:cert-options-512928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:cert-options-512928 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:36:23.767736   40255 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:36:23.769624   40255 out.go:177] * Starting control plane node cert-options-512928 in cluster cert-options-512928
	I0919 17:36:23.353146   39107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:36:23.485031   39107 node_ready.go:35] waiting up to 6m0s for node "pause-169801" to be "Ready" ...
	I0919 17:36:23.485435   39107 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0919 17:36:23.488685   39107 node_ready.go:49] node "pause-169801" has status "Ready":"True"
	I0919 17:36:23.488709   39107 node_ready.go:38] duration metric: took 3.645592ms waiting for node "pause-169801" to be "Ready" ...
	I0919 17:36:23.488719   39107 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:36:23.495394   39107 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7sskx" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:23.715396   39107 pod_ready.go:92] pod "coredns-5dd5756b68-7sskx" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:23.715423   39107 pod_ready.go:81] duration metric: took 220.005535ms waiting for pod "coredns-5dd5756b68-7sskx" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:23.715438   39107 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:22.382425   39613 main.go:141] libmachine: (NoKubernetes-372421) Waiting to get IP...
	I0919 17:36:22.383277   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:22.383699   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:22.383742   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:22.383653   40071 retry.go:31] will retry after 276.032141ms: waiting for machine to come up
	I0919 17:36:23.135702   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:23.136172   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:23.136194   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:23.136115   40071 retry.go:31] will retry after 325.473638ms: waiting for machine to come up
	I0919 17:36:23.463663   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:23.464195   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:23.464219   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:23.464114   40071 retry.go:31] will retry after 358.03404ms: waiting for machine to come up
	I0919 17:36:23.823606   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:23.824118   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:23.824137   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:23.824073   40071 retry.go:31] will retry after 419.353757ms: waiting for machine to come up
	I0919 17:36:24.244560   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:24.245054   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:24.245070   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:24.245003   40071 retry.go:31] will retry after 491.008265ms: waiting for machine to come up
	I0919 17:36:24.737752   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:24.738182   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:24.738199   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:24.738134   40071 retry.go:31] will retry after 619.294145ms: waiting for machine to come up
	I0919 17:36:25.359045   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:25.359449   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:25.359466   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:25.359417   40071 retry.go:31] will retry after 883.1921ms: waiting for machine to come up
	I0919 17:36:26.243747   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | domain NoKubernetes-372421 has defined MAC address 52:54:00:20:53:c7 in network mk-NoKubernetes-372421
	I0919 17:36:26.244245   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | unable to find current IP address of domain NoKubernetes-372421 in network mk-NoKubernetes-372421
	I0919 17:36:26.244259   39613 main.go:141] libmachine: (NoKubernetes-372421) DBG | I0919 17:36:26.244212   40071 retry.go:31] will retry after 931.601448ms: waiting for machine to come up
	I0919 17:36:24.114227   39107 pod_ready.go:92] pod "etcd-pause-169801" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:24.114257   39107 pod_ready.go:81] duration metric: took 398.809975ms waiting for pod "etcd-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.114271   39107 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.515021   39107 pod_ready.go:92] pod "kube-apiserver-pause-169801" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:24.515045   39107 pod_ready.go:81] duration metric: took 400.765268ms waiting for pod "kube-apiserver-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.515055   39107 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.914226   39107 pod_ready.go:92] pod "kube-controller-manager-pause-169801" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:24.914249   39107 pod_ready.go:81] duration metric: took 399.18718ms waiting for pod "kube-controller-manager-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:24.914259   39107 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-758ss" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:25.314442   39107 pod_ready.go:92] pod "kube-proxy-758ss" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:25.314467   39107 pod_ready.go:81] duration metric: took 400.201542ms waiting for pod "kube-proxy-758ss" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:25.314496   39107 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:25.714134   39107 pod_ready.go:92] pod "kube-scheduler-pause-169801" in "kube-system" namespace has status "Ready":"True"
	I0919 17:36:25.714162   39107 pod_ready.go:81] duration metric: took 399.658189ms waiting for pod "kube-scheduler-pause-169801" in "kube-system" namespace to be "Ready" ...
	I0919 17:36:25.714174   39107 pod_ready.go:38] duration metric: took 2.22544286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
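Each "waiting up to 6m0s for pod … to be Ready" entry above is a poll of the pod's Ready condition through the API server until it reports True. A minimal client-go sketch of that kind of check, shown only to illustrate the pattern (it is not minikube's pod_ready helper):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the named pod until its Ready condition is True or the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(400*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat lookup errors as "not ready yet" and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(cs, "kube-system", "etcd-pause-169801", 6*time.Minute); err != nil {
            fmt.Println("pod never became Ready:", err)
        }
    }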
	I0919 17:36:25.714193   39107 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:36:25.714250   39107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:36:25.728697   39107 api_server.go:72] duration metric: took 2.378274384s to wait for apiserver process to appear ...
	I0919 17:36:25.728724   39107 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:36:25.728742   39107 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I0919 17:36:25.734316   39107 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I0919 17:36:25.735565   39107 api_server.go:141] control plane version: v1.28.2
	I0919 17:36:25.735587   39107 api_server.go:131] duration metric: took 6.8565ms to wait for apiserver health ...
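Once the kube-apiserver process is found via pgrep, minikube confirms readiness by requesting /healthz and expecting a 200 response with an "ok" body, as logged above. A small sketch of that probe; the insecure TLS setting is only because this standalone example does not load the cluster's CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz GETs the apiserver's /healthz endpoint and reports whether it answered "ok".
    func checkHealthz(base string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Certificate verification is skipped here only because this sketch has no CA bundle loaded.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://192.168.39.19:8443"); err != nil {
            fmt.Println("apiserver not healthy:", err)
            return
        }
        fmt.Println("apiserver healthz: ok")
    }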
	I0919 17:36:25.735595   39107 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:36:25.916263   39107 system_pods.go:59] 6 kube-system pods found
	I0919 17:36:25.916289   39107 system_pods.go:61] "coredns-5dd5756b68-7sskx" [409754a7-e951-45ac-beea-ca183d856092] Running
	I0919 17:36:25.916294   39107 system_pods.go:61] "etcd-pause-169801" [12df498d-e471-456f-a15a-bcfc3ab5ecbd] Running
	I0919 17:36:25.916298   39107 system_pods.go:61] "kube-apiserver-pause-169801" [b62dda67-2d85-47db-8457-4ff45b24b618] Running
	I0919 17:36:25.916303   39107 system_pods.go:61] "kube-controller-manager-pause-169801" [1b2efcc4-05be-4adc-aad4-5a8081270dc7] Running
	I0919 17:36:25.916307   39107 system_pods.go:61] "kube-proxy-758ss" [f22589b0-519f-4fa0-ba97-e13745761263] Running
	I0919 17:36:25.916313   39107 system_pods.go:61] "kube-scheduler-pause-169801" [1533c6dc-55e9-4beb-b217-4b6ba5ab002c] Running
	I0919 17:36:25.916319   39107 system_pods.go:74] duration metric: took 180.719066ms to wait for pod list to return data ...
	I0919 17:36:25.916326   39107 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:36:26.114026   39107 default_sa.go:45] found service account: "default"
	I0919 17:36:26.114050   39107 default_sa.go:55] duration metric: took 197.719046ms for default service account to be created ...
	I0919 17:36:26.114058   39107 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:36:26.317689   39107 system_pods.go:86] 6 kube-system pods found
	I0919 17:36:26.317715   39107 system_pods.go:89] "coredns-5dd5756b68-7sskx" [409754a7-e951-45ac-beea-ca183d856092] Running
	I0919 17:36:26.317720   39107 system_pods.go:89] "etcd-pause-169801" [12df498d-e471-456f-a15a-bcfc3ab5ecbd] Running
	I0919 17:36:26.317724   39107 system_pods.go:89] "kube-apiserver-pause-169801" [b62dda67-2d85-47db-8457-4ff45b24b618] Running
	I0919 17:36:26.317728   39107 system_pods.go:89] "kube-controller-manager-pause-169801" [1b2efcc4-05be-4adc-aad4-5a8081270dc7] Running
	I0919 17:36:26.317731   39107 system_pods.go:89] "kube-proxy-758ss" [f22589b0-519f-4fa0-ba97-e13745761263] Running
	I0919 17:36:26.317735   39107 system_pods.go:89] "kube-scheduler-pause-169801" [1533c6dc-55e9-4beb-b217-4b6ba5ab002c] Running
	I0919 17:36:26.317742   39107 system_pods.go:126] duration metric: took 203.678253ms to wait for k8s-apps to be running ...
	I0919 17:36:26.317750   39107 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:36:26.317798   39107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:36:26.331927   39107 system_svc.go:56] duration metric: took 14.171003ms WaitForService to wait for kubelet.
	I0919 17:36:26.331948   39107 kubeadm.go:581] duration metric: took 2.981534027s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
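The kubelet check above simply runs systemctl on the guest over SSH and relies on the exit status: "is-active --quiet" exits 0 only when the unit is active. A local sketch of the same idea using os/exec instead of minikube's SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubeletActive reports whether the kubelet systemd unit is currently active.
    // systemctl is-active --quiet prints nothing and signals the state via its exit code.
    func kubeletActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
        if kubeletActive() {
            fmt.Println("kubelet service is running")
        } else {
            fmt.Println("kubelet service is not active")
        }
    }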
	I0919 17:36:26.331980   39107 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:36:26.514095   39107 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:36:26.514129   39107 node_conditions.go:123] node cpu capacity is 2
	I0919 17:36:26.514142   39107 node_conditions.go:105] duration metric: took 182.156299ms to run NodePressure ...
	I0919 17:36:26.514155   39107 start.go:228] waiting for startup goroutines ...
	I0919 17:36:26.514164   39107 start.go:233] waiting for cluster config update ...
	I0919 17:36:26.514177   39107 start.go:242] writing updated cluster config ...
	I0919 17:36:26.514525   39107 ssh_runner.go:195] Run: rm -f paused
	I0919 17:36:26.561721   39107 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:36:26.563842   39107 out.go:177] * Done! kubectl is now configured to use "pause-169801" cluster and "default" namespace by default
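The closing lines compare the local kubectl version against the cluster's control-plane version and report the difference in minor versions; a large skew would normally produce a compatibility warning instead of the clean "Done!" above. A tiny sketch of that comparison using plain string parsing (not minikube's semver handling):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor versions of two "major.minor.patch" strings.
    func minorSkew(client, server string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("unexpected version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        c, err := minor(client)
        if err != nil {
            return 0, err
        }
        s, err := minor(server)
        if err != nil {
            return 0, err
        }
        if c > s {
            return c - s, nil
        }
        return s - c, nil
    }

    func main() {
        skew, _ := minorSkew("1.28.2", "1.28.2")
        fmt.Printf("kubectl: 1.28.2, cluster: 1.28.2 (minor skew: %d)\n", skew)
    }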
	I0919 17:36:23.771107   40255 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 17:36:23.771143   40255 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 17:36:23.771163   40255 cache.go:57] Caching tarball of preloaded images
	I0919 17:36:23.771242   40255 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:36:23.771259   40255 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 17:36:23.771359   40255 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/cert-options-512928/config.json ...
	I0919 17:36:23.771372   40255 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/cert-options-512928/config.json: {Name:mk12a91b04ff8eaa0b4e1576041f04cd31a91c06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:36:23.771532   40255 start.go:365] acquiring machines lock for cert-options-512928: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
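The cert-options-512928 run above checks whether the preloaded image tarball for v1.28.2 on cri-o is already present in the local cache and skips the download when it is. A minimal sketch of that existence check; the path layout mirrors the log, but the helper itself is illustrative rather than minikube's preload package:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath builds the expected location of the preloaded tarball inside the minikube cache directory.
    func preloadPath(minikubeHome, k8sVersion, runtime string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath("/home/jenkins/minikube-integration/17240-6042/.minikube", "v1.28.2", "cri-o")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("Found local preload:", p, "- skipping download")
        } else {
            fmt.Println("Preload missing, would download:", p)
        }
    }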
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:33:16 UTC, ends at Tue 2023-09-19 17:36:29 UTC. --
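The journal entries below show CRI-O answering the kubelet's periodic CRI calls (Version, ImageFsInfo, ListContainers) over its unix socket; the very long ListContainersResponse blobs are simply the full container list serialized into a single debug line. A minimal sketch of issuing the same ListContainers call against a CRI-O socket; the socket path and client setup are typical defaults and are not taken from this run:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O normally listens on /var/run/crio/crio.sock.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter returns every container, matching the
        // "No filters were applied, returning full container list" debug lines.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s attempt=%d state=%s id=%s\n",
                c.Metadata.Name, c.Metadata.Attempt, c.State, c.Id)
        }
    }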
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.218177062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144989218162748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=4e6b6155-84e4-45cf-b6a7-974f9fa78ae4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.218599787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1690a26c-d371-44ee-b5a3-0ff7f4f853ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.218754262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1690a26c-d371-44ee-b5a3-0ff7f4f853ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.221009702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695144968880610937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695144968862953157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695144963327943744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:
map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695144963262836081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695144963335283586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.
container.hash: 79a5885c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695144963296905813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1695144938184050090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_EXITED,CreatedAt:1695144936184171098,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c
14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_EXITED,CreatedAt:1695144935185395454,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_EXITED,CreatedAt:1695144928430924047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.container.hash: 79a5885c,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1695144920533847152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1695144920458396180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1690a26c-d371-44ee-b5a3-0ff7f4f853ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.268480971Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8c5b8506-0204-4952-a58b-bee6500be6ea name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.268604019Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8c5b8506-0204-4952-a58b-bee6500be6ea name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.270569057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=18abfa0d-d36f-4af4-ae13-902fa52abdd0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.270979456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144989270964550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=18abfa0d-d36f-4af4-ae13-902fa52abdd0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.271769508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9494f39-ee04-4fc1-8b7e-262cdfc49552 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.271819287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a9494f39-ee04-4fc1-8b7e-262cdfc49552 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.272100240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695144968880610937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695144968862953157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695144963327943744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:
map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695144963262836081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695144963335283586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.
container.hash: 79a5885c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695144963296905813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1695144938184050090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_EXITED,CreatedAt:1695144936184171098,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c
14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_EXITED,CreatedAt:1695144935185395454,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_EXITED,CreatedAt:1695144928430924047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.container.hash: 79a5885c,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1695144920533847152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1695144920458396180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a9494f39-ee04-4fc1-8b7e-262cdfc49552 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.311900464Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b08e5a9e-aa79-4181-a479-0ac6b19bbc4f name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.311983464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b08e5a9e-aa79-4181-a479-0ac6b19bbc4f name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.313911794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=364ac036-6ced-4f7c-8354-098a70dd7f20 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.314317498Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144989314301849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=364ac036-6ced-4f7c-8354-098a70dd7f20 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.314941541Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=55cb36fa-0087-4dc9-bb46-ac721dd50050 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.315019081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=55cb36fa-0087-4dc9-bb46-ac721dd50050 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.315271223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695144968880610937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695144968862953157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695144963327943744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:
map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695144963262836081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695144963335283586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.
container.hash: 79a5885c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695144963296905813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1695144938184050090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_EXITED,CreatedAt:1695144936184171098,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c
14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_EXITED,CreatedAt:1695144935185395454,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_EXITED,CreatedAt:1695144928430924047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.container.hash: 79a5885c,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1695144920533847152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1695144920458396180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=55cb36fa-0087-4dc9-bb46-ac721dd50050 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.357207293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ed25775b-da4d-47cd-9050-d8fe80eb42fd name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.357294311Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ed25775b-da4d-47cd-9050-d8fe80eb42fd name=/runtime.v1.RuntimeService/Version
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.359070509Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=11738cb0-c4f1-4eaa-88a2-841b15572be1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.359476692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695144989359463353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=11738cb0-c4f1-4eaa-88a2-841b15572be1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.359941002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a5f6357a-92c3-4a36-a8e9-bdf83309e02a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.360016531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a5f6357a-92c3-4a36-a8e9-bdf83309e02a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:36:29 pause-169801 crio[2732]: time="2023-09-19 17:36:29.360253172Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695144968880610937,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695144968862953157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695144963327943744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:
map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695144963262836081,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695144963335283586,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.
container.hash: 79a5885c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695144963296905813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247,PodSandboxId:45befc0316222b33b9146efb58a2cb4280f751cea0d405087e82e312691ca1df,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1695144938184050090,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189dd926a7ca268ae0fd7f37c7f8ba39,},Annotations:map[string]string{io.kubernetes.container.hash: c80f849c,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29,PodSandboxId:57e1ce1cf7d6a4dc64d8f4b128ff9bb779722b0b380f227142c83ea9bc6fd5da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_EXITED,CreatedAt:1695144936184171098,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ec5ea96bd9b1bb33804ebcce1af5dfd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c
14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d,PodSandboxId:c43d2da3228e0ecc361654e40b6e778768909526e83ad0fbdee9886c51c25414,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_EXITED,CreatedAt:1695144935185395454,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4d53fce1507abead24352955a989a64,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51,PodSandboxId:489699a813bfdff82d76d594752f1bf6b4f06d05358eb340d8b0dbf6d1aee330,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_EXITED,CreatedAt:1695144928430924047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-169801,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f010d4c6a8e99d9866f0d1d72bc344de,},Annotations:map[string]string{io.kubernetes.container.hash: 79a5885c,io.kubernetes.container.res
tartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de,PodSandboxId:b47445bbb89b94815d7c6be291d03d728fab9d1e8a3774b809d73a1e5c85b50d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_EXITED,CreatedAt:1695144920533847152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-758ss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f22589b0-519f-4fa0-ba97-e13745761263,},Annotations:map[string]string{io.kubernetes.container.hash: cf0059d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538,PodSandboxId:91928b780f76299f760cdc4aa7775d610a30e881db7498258e9103addcd14c70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1695144920458396180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-7sskx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409754a7-e951-45ac-beea-ca183d856092,},Annotations:map[string]string{io.kubernetes.container.hash: 66d6fcf7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a5f6357a-92c3-4a36-a8e9-bdf83309e02a name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	084d99db51fc5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   20 seconds ago       Running             coredns                   2                   91928b780f762       coredns-5dd5756b68-7sskx
	d2cc21971a5fa       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   20 seconds ago       Running             kube-proxy                2                   b47445bbb89b9       kube-proxy-758ss
	14546ad7fd1d9       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   26 seconds ago       Running             kube-apiserver            3                   489699a813bfd       kube-apiserver-pause-169801
	597729fb7fc2d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   26 seconds ago       Running             etcd                      3                   45befc0316222       etcd-pause-169801
	4e4d161aaa6a5       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   26 seconds ago       Running             kube-controller-manager   3                   57e1ce1cf7d6a       kube-controller-manager-pause-169801
	2d1a6e275bd3a       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   26 seconds ago       Running             kube-scheduler            3                   c43d2da3228e0       kube-scheduler-pause-169801
	3b54bf0f57296       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   51 seconds ago       Exited              etcd                      2                   45befc0316222       etcd-pause-169801
	b8526c6c5b0f7       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   53 seconds ago       Exited              kube-controller-manager   2                   57e1ce1cf7d6a       kube-controller-manager-pause-169801
	a893fa5bb1455       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   54 seconds ago       Exited              kube-scheduler            2                   c43d2da3228e0       kube-scheduler-pause-169801
	ee3fa8fbdd88d       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   About a minute ago   Exited              kube-apiserver            2                   489699a813bfd       kube-apiserver-pause-169801
	12b8ee660f025       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   About a minute ago   Exited              kube-proxy                1                   b47445bbb89b9       kube-proxy-758ss
	66189fd3696e2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   1                   91928b780f762       coredns-5dd5756b68-7sskx
	
	* 
	* ==> coredns [084d99db51fc5101cee01e0596c752e6f70d6cdddd9c2ca28f972457feb19c49] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54636 - 52957 "HINFO IN 3475859521479418701.1959087963278092099. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015429337s
	
	* 
	* ==> coredns [66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45625 - 14791 "HINFO IN 7341575377565025544.6655293081312810800. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016330555s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-169801
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-169801
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=pause-169801
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_33_54_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:33:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-169801
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 17:36:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:36:08 +0000   Tue, 19 Sep 2023 17:33:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:36:08 +0000   Tue, 19 Sep 2023 17:33:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:36:08 +0000   Tue, 19 Sep 2023 17:33:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:36:08 +0000   Tue, 19 Sep 2023 17:33:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    pause-169801
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 ab18f5faf87b4f8f9fe13be2a6396937
	  System UUID:                ab18f5fa-f87b-4f8f-9fe1-3be2a6396937
	  Boot ID:                    4d1e504d-f388-4880-b5b3-37b59688735a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-7sskx                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m23s
	  kube-system                 etcd-pause-169801                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m35s
	  kube-system                 kube-apiserver-pause-169801             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-pause-169801    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-758ss                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-pause-169801             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  Starting                 20s                    kube-proxy       
	  Normal  Starting                 2m45s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m44s (x8 over 2m45s)  kubelet          Node pause-169801 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s (x8 over 2m45s)  kubelet          Node pause-169801 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s (x7 over 2m45s)  kubelet          Node pause-169801 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m35s                  kubelet          Node pause-169801 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m35s                  kubelet          Node pause-169801 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s                  kubelet          Node pause-169801 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m35s                  kubelet          Node pause-169801 status is now: NodeReady
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m24s                  node-controller  Node pause-169801 event: Registered Node pause-169801 in Controller
	  Normal  Starting                 27s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)      kubelet          Node pause-169801 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)      kubelet          Node pause-169801 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)      kubelet          Node pause-169801 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                     node-controller  Node pause-169801 event: Registered Node pause-169801 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072621] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.711818] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.423770] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152141] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.084995] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.798807] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.188704] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.195670] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.165538] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.310525] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +12.075960] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[  +9.338337] systemd-fstab-generator[1272]: Ignoring "noauto" for root device
	[Sep19 17:34] kauditd_printk_skb: 24 callbacks suppressed
	[Sep19 17:35] systemd-fstab-generator[2457]: Ignoring "noauto" for root device
	[  +0.313392] systemd-fstab-generator[2491]: Ignoring "noauto" for root device
	[  +0.307929] systemd-fstab-generator[2515]: Ignoring "noauto" for root device
	[  +0.361399] systemd-fstab-generator[2599]: Ignoring "noauto" for root device
	[  +0.388757] systemd-fstab-generator[2624]: Ignoring "noauto" for root device
	[Sep19 17:36] systemd-fstab-generator[3812]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [3b54bf0f57296074ee1cd3d1cb9f9ebd618aaff928ec29683d267fd98cc7c247] <==
	* {"level":"info","ts":"2023-09-19T17:35:38.755368Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:35:40.140707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-19T17:35:40.140772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-19T17:35:40.140806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgPreVoteResp from 683e1d26ac7e3123 at term 2"}
	{"level":"info","ts":"2023-09-19T17:35:40.140819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became candidate at term 3"}
	{"level":"info","ts":"2023-09-19T17:35:40.140824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgVoteResp from 683e1d26ac7e3123 at term 3"}
	{"level":"info","ts":"2023-09-19T17:35:40.140833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became leader at term 3"}
	{"level":"info","ts":"2023-09-19T17:35:40.140843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 683e1d26ac7e3123 elected leader 683e1d26ac7e3123 at term 3"}
	{"level":"info","ts":"2023-09-19T17:35:40.147298Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"683e1d26ac7e3123","local-member-attributes":"{Name:pause-169801 ClientURLs:[https://192.168.39.19:2379]}","request-path":"/0/members/683e1d26ac7e3123/attributes","cluster-id":"3f32d84448c0bab8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:35:40.147315Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:35:40.147691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:35:40.148986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.19:2379"}
	{"level":"info","ts":"2023-09-19T17:35:40.148996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T17:35:40.149141Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:35:40.14918Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:35:44.966015Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-19T17:35:44.966186Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-169801","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"]}
	{"level":"warn","ts":"2023-09-19T17:35:44.966283Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-19T17:35:44.966381Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-19T17:35:45.008768Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-19T17:35:45.008856Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.19:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-19T17:35:45.008918Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"683e1d26ac7e3123","current-leader-member-id":"683e1d26ac7e3123"}
	{"level":"info","ts":"2023-09-19T17:35:45.019803Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:35:45.019912Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:35:45.019922Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-169801","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"]}
	
	* 
	* ==> etcd [597729fb7fc2d04b6bed1c17e9eebd8eac4182b9afa7a48c34fe2281abfba531] <==
	* {"level":"info","ts":"2023-09-19T17:36:05.476618Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T17:36:05.476696Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T17:36:05.476882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 switched to configuration voters=(7511473280440480035)"}
	{"level":"info","ts":"2023-09-19T17:36:05.476979Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","added-peer-id":"683e1d26ac7e3123","added-peer-peer-urls":["https://192.168.39.19:2380"]}
	{"level":"info","ts":"2023-09-19T17:36:05.477076Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f32d84448c0bab8","local-member-id":"683e1d26ac7e3123","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:36:05.477098Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:36:05.478434Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-19T17:36:05.482719Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"683e1d26ac7e3123","initial-advertise-peer-urls":["https://192.168.39.19:2380"],"listen-peer-urls":["https://192.168.39.19:2380"],"advertise-client-urls":["https://192.168.39.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-19T17:36:05.48278Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-19T17:36:05.482827Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:36:05.482834Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.19:2380"}
	{"level":"info","ts":"2023-09-19T17:36:06.42977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-19T17:36:06.429901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-19T17:36:06.42994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgPreVoteResp from 683e1d26ac7e3123 at term 3"}
	{"level":"info","ts":"2023-09-19T17:36:06.429971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became candidate at term 4"}
	{"level":"info","ts":"2023-09-19T17:36:06.429996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 received MsgVoteResp from 683e1d26ac7e3123 at term 4"}
	{"level":"info","ts":"2023-09-19T17:36:06.430023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"683e1d26ac7e3123 became leader at term 4"}
	{"level":"info","ts":"2023-09-19T17:36:06.430048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 683e1d26ac7e3123 elected leader 683e1d26ac7e3123 at term 4"}
	{"level":"info","ts":"2023-09-19T17:36:06.435783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:36:06.43605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:36:06.43702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.19:2379"}
	{"level":"info","ts":"2023-09-19T17:36:06.435785Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"683e1d26ac7e3123","local-member-attributes":"{Name:pause-169801 ClientURLs:[https://192.168.39.19:2379]}","request-path":"/0/members/683e1d26ac7e3123/attributes","cluster-id":"3f32d84448c0bab8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:36:06.437447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:36:06.437487Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:36:06.437872Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  17:36:29 up 3 min,  0 users,  load average: 0.72, 0.42, 0.18
	Linux pause-169801 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [14546ad7fd1d985dc5ad077f4e1bcd138deddc37573db331dd96a17bfff051aa] <==
	* I0919 17:36:07.925803       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0919 17:36:07.925814       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0919 17:36:07.874207       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 17:36:08.002741       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 17:36:08.024797       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0919 17:36:08.024920       1 aggregator.go:166] initial CRD sync complete...
	I0919 17:36:08.024934       1 autoregister_controller.go:141] Starting autoregister controller
	I0919 17:36:08.024940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 17:36:08.024946       1 cache.go:39] Caches are synced for autoregister controller
	I0919 17:36:08.050547       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0919 17:36:08.072944       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0919 17:36:08.073080       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 17:36:08.073494       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0919 17:36:08.073575       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0919 17:36:08.074558       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 17:36:08.075526       1 shared_informer.go:318] Caches are synced for configmaps
	E0919 17:36:08.087852       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0919 17:36:08.901818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 17:36:09.636911       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0919 17:36:09.649707       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0919 17:36:09.687944       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0919 17:36:09.718220       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 17:36:09.726272       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 17:36:20.454824       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 17:36:20.659908       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [ee3fa8fbdd88da19501e7cae4bc51ec86e94512a7f4dc3ebb076bf8c2f5f1c51] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 17:35:55.250773       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 17:35:55.255528       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 17:35:55.406242       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [4e4d161aaa6a5f859203f989bd350544bef21753e2d49dccd319499a1a47ebe7] <==
	* I0919 17:36:20.435434       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0919 17:36:20.435797       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-169801"
	I0919 17:36:20.435886       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0919 17:36:20.436029       1 event.go:307] "Event occurred" object="pause-169801" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-169801 event: Registered Node pause-169801 in Controller"
	I0919 17:36:20.437274       1 shared_informer.go:318] Caches are synced for ephemeral
	I0919 17:36:20.449931       1 shared_informer.go:318] Caches are synced for deployment
	I0919 17:36:20.453535       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0919 17:36:20.456225       1 shared_informer.go:318] Caches are synced for namespace
	I0919 17:36:20.461349       1 shared_informer.go:318] Caches are synced for daemon sets
	I0919 17:36:20.474521       1 shared_informer.go:318] Caches are synced for attach detach
	I0919 17:36:20.479277       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0919 17:36:20.484069       1 shared_informer.go:318] Caches are synced for endpoint
	I0919 17:36:20.485592       1 shared_informer.go:318] Caches are synced for stateful set
	I0919 17:36:20.488145       1 shared_informer.go:318] Caches are synced for GC
	I0919 17:36:20.488946       1 shared_informer.go:318] Caches are synced for cronjob
	I0919 17:36:20.503019       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0919 17:36:20.503110       1 shared_informer.go:318] Caches are synced for persistent volume
	I0919 17:36:20.521498       1 shared_informer.go:318] Caches are synced for crt configmap
	I0919 17:36:20.559152       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0919 17:36:20.569793       1 shared_informer.go:318] Caches are synced for resource quota
	I0919 17:36:20.582382       1 shared_informer.go:318] Caches are synced for HPA
	I0919 17:36:20.611885       1 shared_informer.go:318] Caches are synced for resource quota
	I0919 17:36:20.966114       1 shared_informer.go:318] Caches are synced for garbage collector
	I0919 17:36:20.966171       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0919 17:36:21.012100       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [b8526c6c5b0f74fb1f219ef5cc48c483abf59524bda9cc03ad269c056efd8d29] <==
	* I0919 17:35:37.173768       1 serving.go:348] Generated self-signed cert in-memory
	I0919 17:35:37.429556       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I0919 17:35:37.429620       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:35:37.431425       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 17:35:37.431742       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 17:35:37.432113       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0919 17:35:37.432284       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de] <==
	* I0919 17:35:20.719756       1 server_others.go:69] "Using iptables proxy"
	E0919 17:35:20.722920       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-169801": dial tcp 192.168.39.19:8443: connect: connection refused
	E0919 17:35:21.779790       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-169801": dial tcp 192.168.39.19:8443: connect: connection refused
	E0919 17:35:23.891120       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-169801": dial tcp 192.168.39.19:8443: connect: connection refused
	E0919 17:35:28.688287       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-169801": dial tcp 192.168.39.19:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [d2cc21971a5fab69b9fa47214e68a83f7fa59b55cdc249f5b2345fdbb5a38145] <==
	* I0919 17:36:09.178200       1 server_others.go:69] "Using iptables proxy"
	I0919 17:36:09.191083       1 node.go:141] Successfully retrieved node IP: 192.168.39.19
	I0919 17:36:09.264777       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 17:36:09.264854       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 17:36:09.272768       1 server_others.go:152] "Using iptables Proxier"
	I0919 17:36:09.272881       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 17:36:09.273120       1 server.go:846] "Version info" version="v1.28.2"
	I0919 17:36:09.273132       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:36:09.274244       1 config.go:188] "Starting service config controller"
	I0919 17:36:09.274311       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 17:36:09.274338       1 config.go:97] "Starting endpoint slice config controller"
	I0919 17:36:09.274342       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 17:36:09.276082       1 config.go:315] "Starting node config controller"
	I0919 17:36:09.276122       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 17:36:09.375012       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 17:36:09.375019       1 shared_informer.go:318] Caches are synced for service config
	I0919 17:36:09.376497       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2d1a6e275bd3a9c89b69a8696ae4eaa5141c8a137866cad616a040747ff3f36d] <==
	* I0919 17:36:05.889524       1 serving.go:348] Generated self-signed cert in-memory
	W0919 17:36:07.938176       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 17:36:07.938231       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:36:07.938243       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 17:36:07.938249       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 17:36:07.995491       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0919 17:36:07.995539       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:36:07.997438       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 17:36:08.004933       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 17:36:08.005070       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 17:36:08.005195       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 17:36:08.106464       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [a893fa5bb14559a0a754d227663a956d8284b86da5c6b07da478d8c2d2448d3d] <==
	* I0919 17:35:36.017811       1 serving.go:348] Generated self-signed cert in-memory
	W0919 17:35:45.561114       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 17:35:45.561199       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:35:45.561214       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 17:35:45.561224       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 17:35:45.589286       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0919 17:35:45.589376       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:35:45.591481       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	E0919 17:35:45.591597       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0919 17:35:45.591827       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:33:16 UTC, ends at Tue 2023-09-19 17:36:30 UTC. --
	Sep 19 17:36:03 pause-169801 kubelet[3818]: W0919 17:36:03.681075    3818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:03 pause-169801 kubelet[3818]: E0919 17:36:03.681139    3818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:03 pause-169801 kubelet[3818]: E0919 17:36:03.964415    3818 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-169801?timeout=10s\": dial tcp 192.168.39.19:8443: connect: connection refused" interval="1.6s"
	Sep 19 17:36:04 pause-169801 kubelet[3818]: I0919 17:36:04.070562    3818 kubelet_node_status.go:70] "Attempting to register node" node="pause-169801"
	Sep 19 17:36:04 pause-169801 kubelet[3818]: E0919 17:36:04.071043    3818 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.19:8443: connect: connection refused" node="pause-169801"
	Sep 19 17:36:04 pause-169801 kubelet[3818]: W0919 17:36:04.078166    3818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-169801&limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:04 pause-169801 kubelet[3818]: E0919 17:36:04.078224    3818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-169801&limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:04 pause-169801 kubelet[3818]: W0919 17:36:04.095067    3818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:04 pause-169801 kubelet[3818]: E0919 17:36:04.095117    3818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.19:8443: connect: connection refused
	Sep 19 17:36:04 pause-169801 kubelet[3818]: E0919 17:36:04.291784    3818 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pause-169801.17865d5cb636c29a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"pause-169801", UID:"pause-169801", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"pause-169801"}, FirstTimestamp:time.Date(2023, time.September, 19, 17, 36, 2, 526986906, time.Local), LastTimestamp:time.Dat
e(2023, time.September, 19, 17, 36, 2, 526986906, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"pause-169801"}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.39.19:8443: connect: connection refused'(may retry after sleeping)
	Sep 19 17:36:05 pause-169801 kubelet[3818]: I0919 17:36:05.672759    3818 kubelet_node_status.go:70] "Attempting to register node" node="pause-169801"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.089943    3818 kubelet_node_status.go:108] "Node was previously registered" node="pause-169801"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.090030    3818 kubelet_node_status.go:73] "Successfully registered node" node="pause-169801"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.092824    3818 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.093976    3818 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.528771    3818 apiserver.go:52] "Watching apiserver"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.532722    3818 topology_manager.go:215] "Topology Admit Handler" podUID="409754a7-e951-45ac-beea-ca183d856092" podNamespace="kube-system" podName="coredns-5dd5756b68-7sskx"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.532916    3818 topology_manager.go:215] "Topology Admit Handler" podUID="f22589b0-519f-4fa0-ba97-e13745761263" podNamespace="kube-system" podName="kube-proxy-758ss"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.553000    3818 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.583956    3818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f22589b0-519f-4fa0-ba97-e13745761263-xtables-lock\") pod \"kube-proxy-758ss\" (UID: \"f22589b0-519f-4fa0-ba97-e13745761263\") " pod="kube-system/kube-proxy-758ss"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.584041    3818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f22589b0-519f-4fa0-ba97-e13745761263-lib-modules\") pod \"kube-proxy-758ss\" (UID: \"f22589b0-519f-4fa0-ba97-e13745761263\") " pod="kube-system/kube-proxy-758ss"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.833978    3818 scope.go:117] "RemoveContainer" containerID="12b8ee660f025c534e19d3f910df734802df73228fea53061ba46393521342de"
	Sep 19 17:36:08 pause-169801 kubelet[3818]: I0919 17:36:08.834823    3818 scope.go:117] "RemoveContainer" containerID="66189fd3696e2379fd8aae3a882100fa6e7bfca543288c55f2ac9d24cc741538"
	Sep 19 17:36:10 pause-169801 kubelet[3818]: I0919 17:36:10.777170    3818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 19 17:36:12 pause-169801 kubelet[3818]: I0919 17:36:12.571368    3818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-169801 -n pause-169801
helpers_test.go:261: (dbg) Run:  kubectl --context pause-169801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (101.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (584.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-100627 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-100627 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: exit status 80 (9m42.680727906s)

                                                
                                                
-- stdout --
	* [old-k8s-version-100627] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node old-k8s-version-100627 in cluster old-k8s-version-100627
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:36:57.219771   42957 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:36:57.220061   42957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:36:57.220071   42957 out.go:309] Setting ErrFile to fd 2...
	I0919 17:36:57.220078   42957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:36:57.220266   42957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:36:57.220905   42957 out.go:303] Setting JSON to false
	I0919 17:36:57.222180   42957 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4767,"bootTime":1695140250,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:36:57.222304   42957 start.go:138] virtualization: kvm guest
	I0919 17:36:57.225042   42957 out.go:177] * [old-k8s-version-100627] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:36:57.227052   42957 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:36:57.227050   42957 notify.go:220] Checking for updates...
	I0919 17:36:57.228616   42957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:36:57.230078   42957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:36:57.231538   42957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:36:57.233022   42957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:36:57.234421   42957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:36:57.236213   42957 config.go:182] Loaded profile config "cert-expiration-142729": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:57.236339   42957 config.go:182] Loaded profile config "cert-options-512928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:57.236458   42957 config.go:182] Loaded profile config "force-systemd-env-367630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:57.236571   42957 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:36:57.270414   42957 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 17:36:57.271869   42957 start.go:298] selected driver: kvm2
	I0919 17:36:57.271885   42957 start.go:902] validating driver "kvm2" against <nil>
	I0919 17:36:57.271895   42957 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:36:57.272595   42957 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:36:57.272667   42957 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:36:57.287444   42957 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:36:57.287489   42957 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 17:36:57.287736   42957 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 17:36:57.287768   42957 cni.go:84] Creating CNI manager for ""
	I0919 17:36:57.287779   42957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:36:57.287788   42957 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 17:36:57.287797   42957 start_flags.go:321] config:
	{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:36:57.287926   42957 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:36:57.289844   42957 out.go:177] * Starting control plane node old-k8s-version-100627 in cluster old-k8s-version-100627
	I0919 17:36:57.291143   42957 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:36:57.291184   42957 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0919 17:36:57.291192   42957 cache.go:57] Caching tarball of preloaded images
	I0919 17:36:57.291279   42957 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:36:57.291292   42957 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0919 17:36:57.291410   42957 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:36:57.291433   42957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json: {Name:mk041e36985762937be200a927b28475db893d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:36:57.291562   42957 start.go:365] acquiring machines lock for old-k8s-version-100627: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:37:33.789228   42957 start.go:369] acquired machines lock for "old-k8s-version-100627" in 36.497587991s
	I0919 17:37:33.789292   42957 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:37:33.789415   42957 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 17:37:33.791688   42957 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 17:37:33.791908   42957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:37:33.791992   42957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:37:33.810866   42957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0919 17:37:33.811289   42957 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:37:33.811874   42957 main.go:141] libmachine: Using API Version  1
	I0919 17:37:33.811894   42957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:37:33.812250   42957 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:37:33.812497   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:37:33.812659   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:37:33.812843   42957 start.go:159] libmachine.API.Create for "old-k8s-version-100627" (driver="kvm2")
	I0919 17:37:33.812868   42957 client.go:168] LocalClient.Create starting
	I0919 17:37:33.812910   42957 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem
	I0919 17:37:33.812954   42957 main.go:141] libmachine: Decoding PEM data...
	I0919 17:37:33.812978   42957 main.go:141] libmachine: Parsing certificate...
	I0919 17:37:33.813052   42957 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem
	I0919 17:37:33.813077   42957 main.go:141] libmachine: Decoding PEM data...
	I0919 17:37:33.813096   42957 main.go:141] libmachine: Parsing certificate...
	I0919 17:37:33.813121   42957 main.go:141] libmachine: Running pre-create checks...
	I0919 17:37:33.813135   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .PreCreateCheck
	I0919 17:37:33.813600   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetConfigRaw
	I0919 17:37:33.814018   42957 main.go:141] libmachine: Creating machine...
	I0919 17:37:33.814031   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .Create
	I0919 17:37:33.814173   42957 main.go:141] libmachine: (old-k8s-version-100627) Creating KVM machine...
	I0919 17:37:33.815228   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found existing default KVM network
	I0919 17:37:33.817478   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:33.817302   43487 network.go:212] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0919 17:37:33.818507   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:33.818411   43487 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:11:3a:0c} reservation:<nil>}
	I0919 17:37:33.819309   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:33.819229   43487 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:84:eb:fb} reservation:<nil>}
	I0919 17:37:33.820372   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:33.820292   43487 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a70}
	I0919 17:37:33.825991   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | trying to create private KVM network mk-old-k8s-version-100627 192.168.72.0/24...
	I0919 17:37:33.898879   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | private KVM network mk-old-k8s-version-100627 192.168.72.0/24 created
	I0919 17:37:33.898924   42957 main.go:141] libmachine: (old-k8s-version-100627) Setting up store path in /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627 ...
	I0919 17:37:33.898943   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:33.898859   43487 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:37:33.898971   42957 main.go:141] libmachine: (old-k8s-version-100627) Building disk image from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 17:37:33.898995   42957 main.go:141] libmachine: (old-k8s-version-100627) Downloading /home/jenkins/minikube-integration/17240-6042/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 17:37:34.113314   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:34.113170   43487 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa...
	I0919 17:37:34.335920   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:34.335756   43487 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/old-k8s-version-100627.rawdisk...
	I0919 17:37:34.335974   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Writing magic tar header
	I0919 17:37:34.335995   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Writing SSH key tar header
	I0919 17:37:34.336013   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:34.335862   43487 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627 ...
	I0919 17:37:34.336039   42957 main.go:141] libmachine: (old-k8s-version-100627) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627 (perms=drwx------)
	I0919 17:37:34.336058   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627
	I0919 17:37:34.336076   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines
	I0919 17:37:34.336090   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:37:34.336105   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042
	I0919 17:37:34.336120   42957 main.go:141] libmachine: (old-k8s-version-100627) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines (perms=drwxr-xr-x)
	I0919 17:37:34.336132   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 17:37:34.336149   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Checking permissions on dir: /home/jenkins
	I0919 17:37:34.336170   42957 main.go:141] libmachine: (old-k8s-version-100627) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube (perms=drwxr-xr-x)
	I0919 17:37:34.336190   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Checking permissions on dir: /home
	I0919 17:37:34.336204   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Skipping /home - not owner
	I0919 17:37:34.336218   42957 main.go:141] libmachine: (old-k8s-version-100627) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042 (perms=drwxrwxr-x)
	I0919 17:37:34.336230   42957 main.go:141] libmachine: (old-k8s-version-100627) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 17:37:34.336289   42957 main.go:141] libmachine: (old-k8s-version-100627) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 17:37:34.336322   42957 main.go:141] libmachine: (old-k8s-version-100627) Creating domain...
	I0919 17:37:34.337277   42957 main.go:141] libmachine: (old-k8s-version-100627) define libvirt domain using xml: 
	I0919 17:37:34.337303   42957 main.go:141] libmachine: (old-k8s-version-100627) <domain type='kvm'>
	I0919 17:37:34.337315   42957 main.go:141] libmachine: (old-k8s-version-100627)   <name>old-k8s-version-100627</name>
	I0919 17:37:34.337326   42957 main.go:141] libmachine: (old-k8s-version-100627)   <memory unit='MiB'>2200</memory>
	I0919 17:37:34.337343   42957 main.go:141] libmachine: (old-k8s-version-100627)   <vcpu>2</vcpu>
	I0919 17:37:34.337357   42957 main.go:141] libmachine: (old-k8s-version-100627)   <features>
	I0919 17:37:34.337368   42957 main.go:141] libmachine: (old-k8s-version-100627)     <acpi/>
	I0919 17:37:34.337384   42957 main.go:141] libmachine: (old-k8s-version-100627)     <apic/>
	I0919 17:37:34.337393   42957 main.go:141] libmachine: (old-k8s-version-100627)     <pae/>
	I0919 17:37:34.337398   42957 main.go:141] libmachine: (old-k8s-version-100627)     
	I0919 17:37:34.337407   42957 main.go:141] libmachine: (old-k8s-version-100627)   </features>
	I0919 17:37:34.337413   42957 main.go:141] libmachine: (old-k8s-version-100627)   <cpu mode='host-passthrough'>
	I0919 17:37:34.337428   42957 main.go:141] libmachine: (old-k8s-version-100627)   
	I0919 17:37:34.337438   42957 main.go:141] libmachine: (old-k8s-version-100627)   </cpu>
	I0919 17:37:34.337448   42957 main.go:141] libmachine: (old-k8s-version-100627)   <os>
	I0919 17:37:34.337462   42957 main.go:141] libmachine: (old-k8s-version-100627)     <type>hvm</type>
	I0919 17:37:34.337478   42957 main.go:141] libmachine: (old-k8s-version-100627)     <boot dev='cdrom'/>
	I0919 17:37:34.337485   42957 main.go:141] libmachine: (old-k8s-version-100627)     <boot dev='hd'/>
	I0919 17:37:34.337492   42957 main.go:141] libmachine: (old-k8s-version-100627)     <bootmenu enable='no'/>
	I0919 17:37:34.337497   42957 main.go:141] libmachine: (old-k8s-version-100627)   </os>
	I0919 17:37:34.337503   42957 main.go:141] libmachine: (old-k8s-version-100627)   <devices>
	I0919 17:37:34.337511   42957 main.go:141] libmachine: (old-k8s-version-100627)     <disk type='file' device='cdrom'>
	I0919 17:37:34.337530   42957 main.go:141] libmachine: (old-k8s-version-100627)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/boot2docker.iso'/>
	I0919 17:37:34.337555   42957 main.go:141] libmachine: (old-k8s-version-100627)       <target dev='hdc' bus='scsi'/>
	I0919 17:37:34.337569   42957 main.go:141] libmachine: (old-k8s-version-100627)       <readonly/>
	I0919 17:37:34.337581   42957 main.go:141] libmachine: (old-k8s-version-100627)     </disk>
	I0919 17:37:34.337592   42957 main.go:141] libmachine: (old-k8s-version-100627)     <disk type='file' device='disk'>
	I0919 17:37:34.337602   42957 main.go:141] libmachine: (old-k8s-version-100627)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 17:37:34.337618   42957 main.go:141] libmachine: (old-k8s-version-100627)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/old-k8s-version-100627.rawdisk'/>
	I0919 17:37:34.337637   42957 main.go:141] libmachine: (old-k8s-version-100627)       <target dev='hda' bus='virtio'/>
	I0919 17:37:34.337651   42957 main.go:141] libmachine: (old-k8s-version-100627)     </disk>
	I0919 17:37:34.337664   42957 main.go:141] libmachine: (old-k8s-version-100627)     <interface type='network'>
	I0919 17:37:34.337679   42957 main.go:141] libmachine: (old-k8s-version-100627)       <source network='mk-old-k8s-version-100627'/>
	I0919 17:37:34.337690   42957 main.go:141] libmachine: (old-k8s-version-100627)       <model type='virtio'/>
	I0919 17:37:34.337699   42957 main.go:141] libmachine: (old-k8s-version-100627)     </interface>
	I0919 17:37:34.337714   42957 main.go:141] libmachine: (old-k8s-version-100627)     <interface type='network'>
	I0919 17:37:34.337729   42957 main.go:141] libmachine: (old-k8s-version-100627)       <source network='default'/>
	I0919 17:37:34.337742   42957 main.go:141] libmachine: (old-k8s-version-100627)       <model type='virtio'/>
	I0919 17:37:34.337755   42957 main.go:141] libmachine: (old-k8s-version-100627)     </interface>
	I0919 17:37:34.337768   42957 main.go:141] libmachine: (old-k8s-version-100627)     <serial type='pty'>
	I0919 17:37:34.337812   42957 main.go:141] libmachine: (old-k8s-version-100627)       <target port='0'/>
	I0919 17:37:34.337840   42957 main.go:141] libmachine: (old-k8s-version-100627)     </serial>
	I0919 17:37:34.337853   42957 main.go:141] libmachine: (old-k8s-version-100627)     <console type='pty'>
	I0919 17:37:34.337867   42957 main.go:141] libmachine: (old-k8s-version-100627)       <target type='serial' port='0'/>
	I0919 17:37:34.337881   42957 main.go:141] libmachine: (old-k8s-version-100627)     </console>
	I0919 17:37:34.337893   42957 main.go:141] libmachine: (old-k8s-version-100627)     <rng model='virtio'>
	I0919 17:37:34.337911   42957 main.go:141] libmachine: (old-k8s-version-100627)       <backend model='random'>/dev/random</backend>
	I0919 17:37:34.337927   42957 main.go:141] libmachine: (old-k8s-version-100627)     </rng>
	I0919 17:37:34.337945   42957 main.go:141] libmachine: (old-k8s-version-100627)     
	I0919 17:37:34.337957   42957 main.go:141] libmachine: (old-k8s-version-100627)     
	I0919 17:37:34.337967   42957 main.go:141] libmachine: (old-k8s-version-100627)   </devices>
	I0919 17:37:34.337979   42957 main.go:141] libmachine: (old-k8s-version-100627) </domain>
	I0919 17:37:34.338009   42957 main.go:141] libmachine: (old-k8s-version-100627) 
	I0919 17:37:34.341988   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:59:9e:6d in network default
	I0919 17:37:34.342499   42957 main.go:141] libmachine: (old-k8s-version-100627) Ensuring networks are active...
	I0919 17:37:34.342518   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:34.343248   42957 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network default is active
	I0919 17:37:34.343601   42957 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network mk-old-k8s-version-100627 is active
	I0919 17:37:34.344172   42957 main.go:141] libmachine: (old-k8s-version-100627) Getting domain xml...
	I0919 17:37:34.344858   42957 main.go:141] libmachine: (old-k8s-version-100627) Creating domain...
	I0919 17:37:35.707310   42957 main.go:141] libmachine: (old-k8s-version-100627) Waiting to get IP...
	I0919 17:37:35.708263   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:35.708897   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:35.708926   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:35.708882   43487 retry.go:31] will retry after 245.540083ms: waiting for machine to come up
	I0919 17:37:35.956430   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:35.956940   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:35.956967   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:35.956894   43487 retry.go:31] will retry after 263.282783ms: waiting for machine to come up
	I0919 17:37:36.221417   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:36.221954   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:36.221983   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:36.221904   43487 retry.go:31] will retry after 296.394237ms: waiting for machine to come up
	I0919 17:37:36.520488   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:36.521055   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:36.521102   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:36.521002   43487 retry.go:31] will retry after 578.041635ms: waiting for machine to come up
	I0919 17:37:37.100771   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:37.101345   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:37.101376   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:37.101291   43487 retry.go:31] will retry after 724.068331ms: waiting for machine to come up
	I0919 17:37:37.827288   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:37.827807   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:37.827835   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:37.827750   43487 retry.go:31] will retry after 611.17761ms: waiting for machine to come up
	I0919 17:37:38.440367   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:38.440939   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:38.440975   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:38.440847   43487 retry.go:31] will retry after 1.07704454s: waiting for machine to come up
	I0919 17:37:39.519591   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:39.520252   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:39.520285   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:39.520208   43487 retry.go:31] will retry after 1.437221515s: waiting for machine to come up
	I0919 17:37:40.959238   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:40.959748   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:40.959773   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:40.959694   43487 retry.go:31] will retry after 1.375412035s: waiting for machine to come up
	I0919 17:37:42.337214   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:42.337643   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:42.337674   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:42.337597   43487 retry.go:31] will retry after 1.98678451s: waiting for machine to come up
	I0919 17:37:44.325684   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:44.326182   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:44.326212   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:44.326133   43487 retry.go:31] will retry after 2.128471411s: waiting for machine to come up
	I0919 17:37:46.456445   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:46.457009   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:46.457053   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:46.456949   43487 retry.go:31] will retry after 3.325529214s: waiting for machine to come up
	I0919 17:37:49.784592   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:49.785174   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:49.785204   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:49.785121   43487 retry.go:31] will retry after 4.044126321s: waiting for machine to come up
	I0919 17:37:53.830547   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:53.831117   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:37:53.831148   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:37:53.831082   43487 retry.go:31] will retry after 3.657867446s: waiting for machine to come up
	I0919 17:37:57.490041   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.490659   42957 main.go:141] libmachine: (old-k8s-version-100627) Found IP for machine: 192.168.72.182
	I0919 17:37:57.490691   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has current primary IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.490701   42957 main.go:141] libmachine: (old-k8s-version-100627) Reserving static IP address...
	I0919 17:37:57.491109   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-100627", mac: "52:54:00:ee:1d:e7", ip: "192.168.72.182"} in network mk-old-k8s-version-100627
	I0919 17:37:57.563055   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Getting to WaitForSSH function...
	I0919 17:37:57.563091   42957 main.go:141] libmachine: (old-k8s-version-100627) Reserved static IP address: 192.168.72.182
	I0919 17:37:57.563118   42957 main.go:141] libmachine: (old-k8s-version-100627) Waiting for SSH to be available...
	I0919 17:37:57.565838   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.566254   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:57.566287   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.566464   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH client type: external
	I0919 17:37:57.566498   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa (-rw-------)
	I0919 17:37:57.566568   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:37:57.566601   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | About to run SSH command:
	I0919 17:37:57.566621   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | exit 0
	I0919 17:37:57.652015   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | SSH cmd err, output: <nil>: 
	I0919 17:37:57.652318   42957 main.go:141] libmachine: (old-k8s-version-100627) KVM machine creation complete!
	I0919 17:37:57.652629   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetConfigRaw
	I0919 17:37:57.653502   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:37:57.654404   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:37:57.654584   42957 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 17:37:57.654601   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:37:57.655892   42957 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 17:37:57.655910   42957 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 17:37:57.655919   42957 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 17:37:57.655930   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:57.658119   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.658433   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:57.658469   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.658556   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:57.658729   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:57.658866   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:57.659004   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:57.659145   42957 main.go:141] libmachine: Using SSH client type: native
	I0919 17:37:57.659519   42957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:37:57.659531   42957 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 17:37:57.771343   42957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:37:57.771366   42957 main.go:141] libmachine: Detecting the provisioner...
	I0919 17:37:57.771377   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:57.773816   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.774137   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:57.774170   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.774350   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:57.774539   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:57.774659   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:57.774776   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:57.774915   42957 main.go:141] libmachine: Using SSH client type: native
	I0919 17:37:57.775211   42957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:37:57.775223   42957 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 17:37:57.889740   42957 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0919 17:37:57.889822   42957 main.go:141] libmachine: found compatible host: buildroot
	I0919 17:37:57.889840   42957 main.go:141] libmachine: Provisioning with buildroot...
	I0919 17:37:57.889858   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:37:57.890107   42957 buildroot.go:166] provisioning hostname "old-k8s-version-100627"
	I0919 17:37:57.890132   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:37:57.890272   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:57.892469   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.892922   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:57.892961   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:57.893078   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:57.893254   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:57.893382   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:57.893556   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:57.893703   42957 main.go:141] libmachine: Using SSH client type: native
	I0919 17:37:57.894015   42957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:37:57.894028   42957 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100627 && echo "old-k8s-version-100627" | sudo tee /etc/hostname
	I0919 17:37:58.024753   42957 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-100627
	
	I0919 17:37:58.024787   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:58.027439   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.027750   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:58.027797   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.027982   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:58.028168   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:58.028342   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:58.028544   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:58.028715   42957 main.go:141] libmachine: Using SSH client type: native
	I0919 17:37:58.029043   42957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:37:58.029062   42957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-100627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-100627/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-100627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:37:58.149248   42957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:37:58.149277   42957 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:37:58.149334   42957 buildroot.go:174] setting up certificates
	I0919 17:37:58.149359   42957 provision.go:83] configureAuth start
	I0919 17:37:58.149379   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:37:58.149677   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:37:58.152062   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.152424   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:58.152471   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.152547   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:58.154716   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.155013   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:58.155045   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.155236   42957 provision.go:138] copyHostCerts
	I0919 17:37:58.155294   42957 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:37:58.155307   42957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:37:58.155374   42957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:37:58.155486   42957 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:37:58.155497   42957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:37:58.155532   42957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:37:58.155604   42957 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:37:58.155614   42957 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:37:58.155649   42957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:37:58.155721   42957 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-100627 san=[192.168.72.182 192.168.72.182 localhost 127.0.0.1 minikube old-k8s-version-100627]
	I0919 17:37:58.326221   42957 provision.go:172] copyRemoteCerts
	I0919 17:37:58.326282   42957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:37:58.326310   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:58.329020   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.329407   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:58.329455   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.329620   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:58.329770   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:58.329912   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:58.330060   42957 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:37:58.413414   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:37:58.436747   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 17:37:58.457845   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 17:37:58.480645   42957 provision.go:86] duration metric: configureAuth took 331.269887ms
	I0919 17:37:58.480670   42957 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:37:58.480817   42957 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:37:58.480879   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:58.483326   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.483706   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:58.483737   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.483980   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:58.484170   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:58.484330   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:58.484499   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:58.484660   42957 main.go:141] libmachine: Using SSH client type: native
	I0919 17:37:58.484998   42957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:37:58.485026   42957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:37:58.785646   42957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:37:58.785669   42957 main.go:141] libmachine: Checking connection to Docker...
	I0919 17:37:58.785680   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetURL
	I0919 17:37:58.786991   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using libvirt version 6000000
	I0919 17:37:58.789153   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.789531   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:58.789566   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.789684   42957 main.go:141] libmachine: Docker is up and running!
	I0919 17:37:58.789697   42957 main.go:141] libmachine: Reticulating splines...
	I0919 17:37:58.789703   42957 client.go:171] LocalClient.Create took 24.976828179s
	I0919 17:37:58.789725   42957 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-100627" took 24.976884689s
	I0919 17:37:58.789734   42957 start.go:300] post-start starting for "old-k8s-version-100627" (driver="kvm2")
	I0919 17:37:58.789745   42957 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:37:58.789761   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:37:58.789979   42957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:37:58.790008   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:58.791995   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.792365   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:58.792396   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.792572   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:58.792749   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:58.792913   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:58.793091   42957 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:37:58.878233   42957 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:37:58.882435   42957 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:37:58.882455   42957 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:37:58.882524   42957 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:37:58.882615   42957 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:37:58.882743   42957 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:37:58.891881   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:37:58.913648   42957 start.go:303] post-start completed in 123.897754ms
	I0919 17:37:58.913702   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetConfigRaw
	I0919 17:37:58.914256   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:37:58.916548   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.916866   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:58.916893   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.917107   42957 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:37:58.917325   42957 start.go:128] duration metric: createHost completed in 25.127900566s
	I0919 17:37:58.917354   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:58.919385   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.919788   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:58.919822   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:58.919901   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:58.920059   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:58.920189   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:58.920312   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:58.920533   42957 main.go:141] libmachine: Using SSH client type: native
	I0919 17:37:58.920882   42957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:37:58.920896   42957 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 17:37:59.037108   42957 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695145079.015035033
	
	I0919 17:37:59.037132   42957 fix.go:206] guest clock: 1695145079.015035033
	I0919 17:37:59.037140   42957 fix.go:219] Guest: 2023-09-19 17:37:59.015035033 +0000 UTC Remote: 2023-09-19 17:37:58.917339103 +0000 UTC m=+61.726190864 (delta=97.69593ms)
	I0919 17:37:59.037179   42957 fix.go:190] guest clock delta is within tolerance: 97.69593ms
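(The delta reported above is simply guest minus host wall clock at the probe: 1695145079.015035033 - 1695145078.917339103 ≈ 0.0977 s, i.e. the 97.69593ms shown, so no guest clock adjustment is needed.)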
	I0919 17:37:59.037190   42957 start.go:83] releasing machines lock for "old-k8s-version-100627", held for 25.247926167s
	I0919 17:37:59.037224   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:37:59.037476   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:37:59.040356   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:59.040723   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:59.040755   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:59.040945   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:37:59.041505   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:37:59.041719   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:37:59.041798   42957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:37:59.041838   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:59.041959   42957 ssh_runner.go:195] Run: cat /version.json
	I0919 17:37:59.041986   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:37:59.044523   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:59.044739   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:59.044920   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:59.044953   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:59.045107   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:37:59.045135   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:37:59.045177   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:59.045288   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:37:59.045387   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:59.045461   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:37:59.045533   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:59.045582   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:37:59.045663   42957 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:37:59.045694   42957 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:37:59.130152   42957 ssh_runner.go:195] Run: systemctl --version
	I0919 17:37:59.153293   42957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:37:59.320811   42957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:37:59.326633   42957 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:37:59.326720   42957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:37:59.341233   42957 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:37:59.341258   42957 start.go:469] detecting cgroup driver to use...
	I0919 17:37:59.341389   42957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:37:59.357647   42957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:37:59.369217   42957 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:37:59.369289   42957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:37:59.381024   42957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:37:59.392585   42957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:37:59.504955   42957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:37:59.625913   42957 docker.go:212] disabling docker service ...
	I0919 17:37:59.626000   42957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:37:59.640356   42957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:37:59.654344   42957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:37:59.759068   42957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:37:59.863020   42957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:37:59.876481   42957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:37:59.894466   42957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0919 17:37:59.894516   42957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:37:59.903759   42957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:37:59.903821   42957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:37:59.913168   42957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:37:59.922427   42957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
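(Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, sketched here for readability; only the keys touched in this step are shown, everything else in the drop-in is left as shipped in the ISO:)
	pause_image = "registry.k8s.io/pause:3.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"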
	I0919 17:37:59.931438   42957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:37:59.940962   42957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:37:59.949045   42957 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:37:59.949101   42957 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 17:37:59.960905   42957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:37:59.971864   42957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:38:00.100650   42957 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:38:00.285817   42957 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:38:00.285886   42957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:38:00.291442   42957 start.go:537] Will wait 60s for crictl version
	I0919 17:38:00.291503   42957 ssh_runner.go:195] Run: which crictl
	I0919 17:38:00.295516   42957 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:38:00.335107   42957 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:38:00.335190   42957 ssh_runner.go:195] Run: crio --version
	I0919 17:38:00.389127   42957 ssh_runner.go:195] Run: crio --version
	I0919 17:38:00.440774   42957 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0919 17:38:00.442086   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:38:00.445069   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:38:00.445597   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:38:00.445631   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:38:00.445866   42957 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0919 17:38:00.450232   42957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:38:00.463185   42957 localpath.go:92] copying /home/jenkins/minikube-integration/17240-6042/.minikube/client.crt -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt
	I0919 17:38:00.463365   42957 localpath.go:117] copying /home/jenkins/minikube-integration/17240-6042/.minikube/client.key -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.key
	I0919 17:38:00.463473   42957 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:38:00.463513   42957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:38:00.500932   42957 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:38:00.500990   42957 ssh_runner.go:195] Run: which lz4
	I0919 17:38:00.505211   42957 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 17:38:00.509548   42957 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:38:00.509582   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0919 17:38:02.343332   42957 crio.go:444] Took 1.838167 seconds to copy over tarball
	I0919 17:38:02.343403   42957 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 17:38:05.361006   42957 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.017535731s)
	I0919 17:38:05.361047   42957 crio.go:451] Took 3.017694 seconds to extract the tarball
	I0919 17:38:05.361062   42957 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 17:38:05.400884   42957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:38:05.459846   42957 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:38:05.459874   42957 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 17:38:05.459923   42957 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:38:05.459955   42957 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:38:05.459992   42957 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:38:05.460015   42957 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0919 17:38:05.460035   42957 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0919 17:38:05.460176   42957 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:38:05.460191   42957 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:38:05.460197   42957 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:38:05.463105   42957 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:38:05.463151   42957 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:38:05.463202   42957 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0919 17:38:05.463251   42957 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0919 17:38:05.463373   42957 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:38:05.463394   42957 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:38:05.463411   42957 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:38:05.463450   42957 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:38:05.610326   42957 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0919 17:38:05.612388   42957 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:38:05.613954   42957 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0919 17:38:05.614287   42957 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:38:05.615578   42957 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0919 17:38:05.616847   42957 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:38:05.624726   42957 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:38:05.755265   42957 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0919 17:38:05.755315   42957 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:38:05.755364   42957 ssh_runner.go:195] Run: which crictl
	I0919 17:38:05.804149   42957 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0919 17:38:05.804198   42957 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0919 17:38:05.804205   42957 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:38:05.804230   42957 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0919 17:38:05.804253   42957 ssh_runner.go:195] Run: which crictl
	I0919 17:38:05.804259   42957 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:38:05.804273   42957 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0919 17:38:05.804318   42957 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0919 17:38:05.804330   42957 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0919 17:38:05.804304   42957 ssh_runner.go:195] Run: which crictl
	I0919 17:38:05.804350   42957 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:38:05.804364   42957 ssh_runner.go:195] Run: which crictl
	I0919 17:38:05.804233   42957 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0919 17:38:05.804390   42957 ssh_runner.go:195] Run: which crictl
	I0919 17:38:05.804413   42957 ssh_runner.go:195] Run: which crictl
	I0919 17:38:05.805098   42957 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0919 17:38:05.805144   42957 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:38:05.805186   42957 ssh_runner.go:195] Run: which crictl
	I0919 17:38:05.817927   42957 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0919 17:38:05.817955   42957 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0919 17:38:05.817959   42957 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0919 17:38:05.817984   42957 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:38:05.818036   42957 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:38:05.818054   42957 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:38:05.818102   42957 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:38:05.986109   42957 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0919 17:38:05.986159   42957 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0919 17:38:05.986219   42957 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0919 17:38:05.986256   42957 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0919 17:38:05.986334   42957 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0919 17:38:05.986366   42957 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0919 17:38:05.986367   42957 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0919 17:38:05.986412   42957 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0919 17:38:05.990889   42957 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0919 17:38:05.990922   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0919 17:38:06.016879   42957 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0919 17:38:06.016949   42957 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0919 17:38:06.433305   42957 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:38:09.079028   42957 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (3.062053589s)
	I0919 17:38:09.079105   42957 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0919 17:38:09.079059   42957 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.645718484s)
	I0919 17:38:09.079177   42957 cache_images.go:92] LoadImages completed in 3.619287599s
	W0919 17:38:09.079255   42957 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I0919 17:38:09.079334   42957 ssh_runner.go:195] Run: crio config
	I0919 17:38:09.151832   42957 cni.go:84] Creating CNI manager for ""
	I0919 17:38:09.151906   42957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:38:09.151933   42957 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:38:09.151958   42957 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-100627 NodeName:old-k8s-version-100627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0919 17:38:09.152151   42957 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-100627"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-100627
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.182:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:38:09.152258   42957 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-100627 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:38:09.152324   42957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0919 17:38:09.162464   42957 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:38:09.162589   42957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:38:09.171249   42957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0919 17:38:09.187762   42957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:38:09.204089   42957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0919 17:38:09.221779   42957 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0919 17:38:09.226318   42957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:38:09.239575   42957 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627 for IP: 192.168.72.182
	I0919 17:38:09.239611   42957 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:38:09.239796   42957 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:38:09.239856   42957 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:38:09.239968   42957 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.key
	I0919 17:38:09.240005   42957 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key.3425b032
	I0919 17:38:09.240027   42957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt.3425b032 with IP's: [192.168.72.182 10.96.0.1 127.0.0.1 10.0.0.1]
	I0919 17:38:09.513416   42957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt.3425b032 ...
	I0919 17:38:09.513453   42957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt.3425b032: {Name:mk841ec0f5b4eaa54c1456a6b41ebbc471464400 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:38:09.513621   42957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key.3425b032 ...
	I0919 17:38:09.513632   42957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key.3425b032: {Name:mkc92cce2a4435296817bf3053c5aea4cc082978 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:38:09.513694   42957 certs.go:337] copying /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt.3425b032 -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt
	I0919 17:38:09.513749   42957 certs.go:341] copying /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key.3425b032 -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key
	I0919 17:38:09.513797   42957 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key
	I0919 17:38:09.513810   42957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.crt with IP's: []
	I0919 17:38:09.734808   42957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.crt ...
	I0919 17:38:09.734836   42957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.crt: {Name:mk76275133bcab6b3e19bd066d45d4e262de636e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:38:09.734990   42957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key ...
	I0919 17:38:09.735001   42957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key: {Name:mkf0faaa5a317b5044a3bc06ddf8652b85acea80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:38:09.735157   42957 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:38:09.735192   42957 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:38:09.735202   42957 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:38:09.735222   42957 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:38:09.735246   42957 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:38:09.735276   42957 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:38:09.735328   42957 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:38:09.735890   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:38:09.761507   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 17:38:09.783974   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:38:09.813971   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 17:38:09.843586   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:38:09.875518   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:38:09.903182   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:38:09.930060   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:38:09.956746   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:38:09.981651   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:38:10.006022   42957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:38:10.033831   42957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:38:10.054101   42957 ssh_runner.go:195] Run: openssl version
	I0919 17:38:10.061358   42957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:38:10.074940   42957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:38:10.080941   42957 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:38:10.081022   42957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:38:10.088347   42957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:38:10.101322   42957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:38:10.114258   42957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:38:10.120390   42957 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:38:10.120465   42957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:38:10.127436   42957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
	I0919 17:38:10.140898   42957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:38:10.150324   42957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:38:10.155088   42957 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:38:10.155129   42957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:38:10.160750   42957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
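(The ln -fs steps above follow OpenSSL's hashed-directory convention: the value printed by openssl x509 -hash -noout becomes the symlink name, with a .0 suffix. For the minikube CA that works out to:)
	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0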
	I0919 17:38:10.170308   42957 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:38:10.174420   42957 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 17:38:10.174475   42957 kubeadm.go:404] StartCluster: {Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:38:10.174546   42957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 17:38:10.174581   42957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:38:10.211686   42957 cri.go:89] found id: ""
	I0919 17:38:10.211759   42957 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:38:10.223219   42957 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:38:10.232247   42957 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:38:10.244325   42957 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:38:10.244376   42957 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0919 17:38:10.386435   42957 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0919 17:38:10.386501   42957 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:38:10.668006   42957 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:38:10.668211   42957 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:38:10.668351   42957 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:38:10.909540   42957 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:38:10.911184   42957 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:38:10.919504   42957 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0919 17:38:11.055803   42957 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:38:11.059075   42957 out.go:204]   - Generating certificates and keys ...
	I0919 17:38:11.059246   42957 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:38:11.059326   42957 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:38:11.256860   42957 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 17:38:11.473259   42957 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 17:38:11.656782   42957 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 17:38:11.778723   42957 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 17:38:12.026611   42957 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 17:38:12.026831   42957 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-100627 localhost] and IPs [192.168.72.182 127.0.0.1 ::1]
	I0919 17:38:12.110073   42957 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 17:38:12.110282   42957 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-100627 localhost] and IPs [192.168.72.182 127.0.0.1 ::1]
	I0919 17:38:12.207298   42957 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 17:38:12.572237   42957 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 17:38:12.798761   42957 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 17:38:12.798896   42957 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:38:13.067583   42957 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:38:13.212141   42957 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:38:13.358029   42957 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:38:13.653617   42957 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:38:13.654769   42957 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:38:13.656532   42957 out.go:204]   - Booting up control plane ...
	I0919 17:38:13.656657   42957 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:38:13.668462   42957 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:38:13.675952   42957 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:38:13.678667   42957 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:38:13.690291   42957 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:38:23.193800   42957 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.504744 seconds
	I0919 17:38:23.194013   42957 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:38:23.211500   42957 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:38:23.745389   42957 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:38:23.745650   42957 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-100627 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0919 17:38:24.260060   42957 kubeadm.go:322] [bootstrap-token] Using token: m3t4au.778p2c7ghycni65s
	I0919 17:38:24.261631   42957 out.go:204]   - Configuring RBAC rules ...
	I0919 17:38:24.261801   42957 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:38:24.269374   42957 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:38:24.273369   42957 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:38:24.276833   42957 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:38:24.279658   42957 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:38:24.361807   42957 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:38:24.686749   42957 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:38:24.688005   42957 kubeadm.go:322] 
	I0919 17:38:24.688101   42957 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:38:24.688132   42957 kubeadm.go:322] 
	I0919 17:38:24.688240   42957 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:38:24.688250   42957 kubeadm.go:322] 
	I0919 17:38:24.688286   42957 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:38:24.688362   42957 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:38:24.688454   42957 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:38:24.688468   42957 kubeadm.go:322] 
	I0919 17:38:24.688533   42957 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:38:24.688636   42957 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:38:24.688733   42957 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:38:24.688742   42957 kubeadm.go:322] 
	I0919 17:38:24.688840   42957 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0919 17:38:24.688954   42957 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:38:24.688965   42957 kubeadm.go:322] 
	I0919 17:38:24.689092   42957 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m3t4au.778p2c7ghycni65s \
	I0919 17:38:24.689224   42957 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 17:38:24.689257   42957 kubeadm.go:322]     --control-plane 	  
	I0919 17:38:24.689262   42957 kubeadm.go:322] 
	I0919 17:38:24.689355   42957 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:38:24.689360   42957 kubeadm.go:322] 
	I0919 17:38:24.689450   42957 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m3t4au.778p2c7ghycni65s \
	I0919 17:38:24.689569   42957 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:38:24.690326   42957 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:38:24.690360   42957 cni.go:84] Creating CNI manager for ""
	I0919 17:38:24.690380   42957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:38:24.692934   42957 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:38:24.694506   42957 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:38:24.712985   42957 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 17:38:24.734854   42957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:38:24.734925   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:24.735002   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=old-k8s-version-100627 minikube.k8s.io/updated_at=2023_09_19T17_38_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:25.011018   42957 ops.go:34] apiserver oom_adj: -16
	I0919 17:38:25.011113   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:25.157574   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:25.790678   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:26.290285   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:26.790721   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:27.290999   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:27.790093   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:28.290887   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:28.790132   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:29.290734   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:29.790430   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:30.290151   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:30.790918   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:31.290201   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:31.790600   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:32.290088   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:32.790910   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:33.290967   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:33.791001   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:34.290050   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:34.790504   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:35.290435   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:35.791036   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:36.290557   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:36.790461   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:37.290588   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:37.791049   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:38.290552   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:38.790117   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:39.290463   42957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:38:39.413074   42957 kubeadm.go:1081] duration metric: took 14.678207906s to wait for elevateKubeSystemPrivileges.
	I0919 17:38:39.413105   42957 kubeadm.go:406] StartCluster complete in 29.238635222s
	I0919 17:38:39.413121   42957 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:38:39.413201   42957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:38:39.414194   42957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:38:39.414419   42957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:38:39.414520   42957 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:38:39.414613   42957 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-100627"
	I0919 17:38:39.414619   42957 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-100627"
	I0919 17:38:39.414634   42957 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-100627"
	I0919 17:38:39.414637   42957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-100627"
	I0919 17:38:39.414666   42957 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:38:39.414679   42957 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 17:38:39.415036   42957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:38:39.415070   42957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:38:39.415117   42957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:38:39.415147   42957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:38:39.431467   42957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0919 17:38:39.431507   42957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0919 17:38:39.432023   42957 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:38:39.432133   42957 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:38:39.432501   42957 main.go:141] libmachine: Using API Version  1
	I0919 17:38:39.432523   42957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:38:39.432636   42957 main.go:141] libmachine: Using API Version  1
	I0919 17:38:39.432648   42957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:38:39.432874   42957 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:38:39.432959   42957 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:38:39.433027   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:38:39.433536   42957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:38:39.433568   42957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:38:39.447920   42957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42977
	I0919 17:38:39.448360   42957 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:38:39.448770   42957 main.go:141] libmachine: Using API Version  1
	I0919 17:38:39.448792   42957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:38:39.449154   42957 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:38:39.449413   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:38:39.451111   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:38:39.453051   42957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:38:39.452322   42957 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-100627"
	I0919 17:38:39.454838   42957 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:38:39.454855   42957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:38:39.454876   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:38:39.454876   42957 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 17:38:39.455325   42957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:38:39.455363   42957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:38:39.458035   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:38:39.458454   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:38:39.458484   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:38:39.458711   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:38:39.458869   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:38:39.459011   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:38:39.459132   42957 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:38:39.471153   42957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I0919 17:38:39.471619   42957 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:38:39.472141   42957 main.go:141] libmachine: Using API Version  1
	I0919 17:38:39.472164   42957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:38:39.472553   42957 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:38:39.473123   42957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:38:39.473164   42957 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0919 17:38:39.477276   42957 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-100627" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0919 17:38:39.477303   42957 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0919 17:38:39.477326   42957 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:38:39.478804   42957 out.go:177] * Verifying Kubernetes components...
	I0919 17:38:39.480302   42957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:38:39.487424   42957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0919 17:38:39.487832   42957 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:38:39.488209   42957 main.go:141] libmachine: Using API Version  1
	I0919 17:38:39.488223   42957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:38:39.488588   42957 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:38:39.488822   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:38:39.490381   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:38:39.490720   42957 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:38:39.490735   42957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:38:39.490754   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:38:39.493468   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:38:39.493852   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:38:39.493869   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:38:39.494045   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:38:39.494636   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:38:39.494816   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:38:39.494967   42957 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:38:39.741329   42957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:38:39.763969   42957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:38:39.765210   42957 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-100627" to be "Ready" ...
	I0919 17:38:39.773614   42957 node_ready.go:49] node "old-k8s-version-100627" has status "Ready":"True"
	I0919 17:38:39.773631   42957 node_ready.go:38] duration metric: took 8.393455ms waiting for node "old-k8s-version-100627" to be "Ready" ...
	I0919 17:38:39.773640   42957 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:38:39.782447   42957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:38:39.856170   42957 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	I0919 17:38:40.992036   42957 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.228023762s)
	I0919 17:38:40.992080   42957 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0919 17:38:40.992182   42957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.209705689s)
	I0919 17:38:40.992205   42957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.250830159s)
	I0919 17:38:40.992293   42957 main.go:141] libmachine: Making call to close driver server
	I0919 17:38:40.992314   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 17:38:40.992302   42957 main.go:141] libmachine: Making call to close driver server
	I0919 17:38:40.992351   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 17:38:40.992780   42957 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:38:40.992797   42957 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:38:40.992807   42957 main.go:141] libmachine: Making call to close driver server
	I0919 17:38:40.992815   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 17:38:40.992894   42957 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:38:40.992921   42957 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:38:40.992932   42957 main.go:141] libmachine: Making call to close driver server
	I0919 17:38:40.992941   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 17:38:40.993334   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Closing plugin on server side
	I0919 17:38:40.993388   42957 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:38:40.993409   42957 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:38:40.993432   42957 main.go:141] libmachine: Making call to close driver server
	I0919 17:38:40.993442   42957 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 17:38:40.993703   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Closing plugin on server side
	I0919 17:38:40.993748   42957 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:38:40.993765   42957 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:38:40.994378   42957 main.go:141] libmachine: (old-k8s-version-100627) DBG | Closing plugin on server side
	I0919 17:38:40.994380   42957 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:38:40.994395   42957 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:38:40.996603   42957 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0919 17:38:40.998464   42957 addons.go:502] enable addons completed in 1.583954285s: enabled=[default-storageclass storage-provisioner]
	I0919 17:38:41.923945   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:38:44.424320   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:38:46.925846   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:38:49.423753   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:38:51.425573   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:38:53.926839   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:38:56.423833   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:38:58.925038   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:01.430112   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:03.950388   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:06.424479   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:08.923940   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:11.421192   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:13.422862   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:15.422923   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:17.424193   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:19.922650   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:21.923279   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:23.924166   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:26.423320   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:28.922964   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:31.420958   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:33.428292   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:35.922464   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:38.424092   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:40.922909   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:43.421968   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:45.422192   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:47.422261   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:49.921704   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:52.422677   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:54.423692   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:56.922166   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:39:58.923769   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:00.924303   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:03.423348   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:05.922232   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:07.923635   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:10.422513   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:12.424976   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:14.923211   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:17.422498   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:19.422782   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:21.920894   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:23.922261   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:26.421761   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:28.923660   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:31.421074   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:33.423859   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:35.923692   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:38.423144   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:40.923016   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:43.426187   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:45.921546   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:47.922751   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:50.421465   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:52.423495   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:54.921504   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:57.422176   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:40:59.423503   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:01.922877   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:03.923132   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:05.924374   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:08.422381   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:10.429579   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:12.921554   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:14.921664   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:16.921835   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:19.421476   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:21.422127   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:23.921572   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:25.921991   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:28.421928   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:30.422061   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:32.920954   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:34.921011   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:36.921182   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:38.921456   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:40.921831   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:43.421114   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:45.421643   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:47.422869   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:49.920872   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:51.921571   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:54.422383   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:56.922734   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:41:59.421026   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:01.422201   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:03.922291   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:06.421328   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:08.422222   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:10.422312   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:12.921027   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:14.921552   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:16.921763   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:18.922006   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:20.923075   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:23.421889   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:25.922273   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:28.423125   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:30.922362   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:33.422227   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:35.921346   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:38.421794   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:39.857192   42957 pod_ready.go:81] duration metric: took 4m0.000984137s waiting for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	E0919 17:42:39.857235   42957 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:42:39.857244   42957 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	I0919 17:42:41.876965   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:43.888180   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:46.376263   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:48.376923   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:50.378297   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:52.878797   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:55.376648   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:57.377288   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:42:59.377506   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:01.877466   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:04.375899   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:06.377125   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:08.385032   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:10.877372   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:12.877913   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:15.376877   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:17.377168   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:19.876635   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:21.877310   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:24.377510   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:26.876453   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:28.877691   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:31.378134   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:33.878580   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:36.375767   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:38.379611   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:40.877240   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:42.883891   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:45.380816   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:47.878580   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:50.377009   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:52.378152   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:54.876756   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:56.877611   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:59.378184   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:01.876872   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:03.877276   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:05.877317   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:07.877444   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:09.878968   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:12.376921   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:14.876969   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:17.376230   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:19.377830   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:21.877906   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:23.878491   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:25.879531   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:28.375826   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:30.377657   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:32.878209   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:35.376578   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:37.377019   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:39.877984   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:41.881884   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:44.377586   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:46.377884   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:48.877801   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:51.377416   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:53.876874   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:55.876962   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:57.877638   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:59.878164   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:01.879235   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:04.377757   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:06.877198   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:08.877418   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:10.878338   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:13.376800   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:15.877398   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:17.878181   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:20.376596   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:22.376944   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:24.878244   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:27.377379   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:29.876657   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:31.876765   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:33.877071   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:36.376822   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:38.877479   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:41.376856   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:43.377597   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:45.877024   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:48.377283   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:50.377961   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:52.877056   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:54.877674   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:57.377616   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:59.876532   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:01.876764   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:03.878819   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:06.377546   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:08.877406   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:10.877528   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:12.877952   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:15.377993   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:17.875912   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:19.877080   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:22.377093   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:24.878340   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:27.377076   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:29.880578   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:32.378323   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:34.877137   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:37.380435   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:39.857402   42957 pod_ready.go:81] duration metric: took 4m0.000143248s waiting for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	E0919 17:46:39.857441   42957 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:46:39.857470   42957 pod_ready.go:38] duration metric: took 8m0.083821212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:46:39.859655   42957 out.go:177] 
	W0919 17:46:39.861310   42957 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: extra waiting: timed out waiting 6m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0919 17:46:39.861333   42957 out.go:239] * 
	W0919 17:46:39.862218   42957 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 17:46:39.863771   42957 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-100627 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0": exit status 80
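For local triage, a minimal reproduction sketch follows (assumptions: a KVM/libvirt host with the docker-machine-driver-kvm2 plugin available and the out/ binary from this job; the command uses a subset of the flags from the failing invocation above, and the kube-dns label comes from the wait loop in the log — this is a sketch, not part of the harness's own post-mortem):

  # Re-run the failing first start with the same profile, driver, runtime and Kubernetes version:
  out/minikube-linux-amd64 start -p old-k8s-version-100627 --memory=2200 \
    --alsologtostderr --wait=true --driver=kvm2 --container-runtime=crio \
    --kubernetes-version=v1.16.0
  # Inspect the CoreDNS pods that never reported Ready in the log (context is written to kubeconfig by the start):
  kubectl --context old-k8s-version-100627 -n kube-system get pods -l k8s-app=kube-dns
  kubectl --context old-k8s-version-100627 -n kube-system describe pods -l k8s-app=kube-dns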
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100627 -n old-k8s-version-100627
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-100627 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-100627 logs -n 25: (1.084277624s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-648984 sudo crio                             | cilium-648984                | jenkins | v1.31.2 | 19 Sep 23 17:36 UTC |                     |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p cilium-648984                                       | cilium-648984                | jenkins | v1.31.2 | 19 Sep 23 17:36 UTC | 19 Sep 23 17:36 UTC |
	| start   | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-512928 ssh                                | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-512928 -- sudo                         | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-512928                                 | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-367630                            | force-systemd-env-367630     | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-415155            | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-140688 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | disable-driver-mounts-140688                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:41 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-215748             | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-415555  | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC | 19 Sep 23 17:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC |                     |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-415155                 | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-215748                  | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-415555       | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC |                     |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:43:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:43:53.143776   46282 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:43:53.143989   46282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:43:53.143999   46282 out.go:309] Setting ErrFile to fd 2...
	I0919 17:43:53.144004   46282 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:43:53.144206   46282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:43:53.144773   46282 out.go:303] Setting JSON to false
	I0919 17:43:53.145692   46282 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5183,"bootTime":1695140250,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:43:53.145746   46282 start.go:138] virtualization: kvm guest
	I0919 17:43:53.147852   46282 out.go:177] * [default-k8s-diff-port-415555] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:43:53.149321   46282 notify.go:220] Checking for updates...
	I0919 17:43:53.149323   46282 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:43:53.150664   46282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:43:53.151992   46282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:43:53.153130   46282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:43:53.154338   46282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:43:53.155518   46282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:43:53.157161   46282 config.go:182] Loaded profile config "default-k8s-diff-port-415555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:43:53.157518   46282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:43:53.157579   46282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:43:53.174263   46282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I0919 17:43:53.174724   46282 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:43:53.175213   46282 main.go:141] libmachine: Using API Version  1
	I0919 17:43:53.175236   46282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:43:53.175561   46282 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:43:53.175741   46282 main.go:141] libmachine: (default-k8s-diff-port-415555) Calling .DriverName
	I0919 17:43:53.175946   46282 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:43:53.176264   46282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:43:53.176289   46282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:43:53.190580   46282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42477
	I0919 17:43:53.190951   46282 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:43:53.191398   46282 main.go:141] libmachine: Using API Version  1
	I0919 17:43:53.191426   46282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:43:53.191709   46282 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:43:53.191859   46282 main.go:141] libmachine: (default-k8s-diff-port-415555) Calling .DriverName
	I0919 17:43:53.227664   46282 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:43:53.229039   46282 start.go:298] selected driver: kvm2
	I0919 17:43:53.229054   46282 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-415555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-415555 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.228 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:43:53.229153   46282 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:43:53.229787   46282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:43:53.229863   46282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:43:53.244494   46282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:43:53.244985   46282 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 17:43:53.245035   46282 cni.go:84] Creating CNI manager for ""
	I0919 17:43:53.245053   46282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:43:53.245074   46282 start_flags.go:321] config:
	{Name:default-k8s-diff-port-415555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-41555
5 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.228 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:43:53.245267   46282 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:43:53.247015   46282 out.go:177] * Starting control plane node default-k8s-diff-port-415555 in cluster default-k8s-diff-port-415555
	I0919 17:43:51.156669   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:43:52.378152   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:54.876756   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:56.877611   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:43:53.248233   46282 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 17:43:53.248265   46282 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 17:43:53.248275   46282 cache.go:57] Caching tarball of preloaded images
	I0919 17:43:53.248361   46282 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:43:53.248372   46282 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 17:43:53.248543   46282 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/config.json ...
	I0919 17:43:53.248757   46282 start.go:365] acquiring machines lock for default-k8s-diff-port-415555: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:43:57.236656   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:00.308642   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:43:59.378184   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:01.876872   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:03.877276   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:05.877317   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:06.388660   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:09.460728   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:07.877444   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:09.878968   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:12.376921   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:14.876969   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:15.540630   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:18.612632   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:17.376230   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:19.377830   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:21.877906   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:24.692718   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:23.878491   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:25.879531   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:27.764733   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:28.375826   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:30.377657   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:33.844621   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:32.878209   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:35.376578   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:36.916606   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:37.377019   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:39.877984   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:41.881884   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:43.000617   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:44.377586   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:46.377884   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:46.068692   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:48.877801   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:51.377416   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:52.148692   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:55.220700   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:44:53.876874   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:55.876962   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:57.877638   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:44:59.878164   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:01.879235   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:01.300709   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:04.372652   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:04.377757   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:06.877198   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:10.452748   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:08.877418   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:10.878338   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:13.524717   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:13.376800   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:15.877398   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:19.604627   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:17.878181   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:20.376596   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:22.676664   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:22.376944   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:24.878244   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:28.756666   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:27.377379   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:29.876657   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:31.876765   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:31.828681   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:33.877071   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:36.376822   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:37.908665   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:38.877479   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:41.376856   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:40.980679   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:43.377597   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:45.877024   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:47.060675   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:50.132695   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:48.377283   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:50.377961   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:52.877056   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:54.877674   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:56.212702   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:59.284606   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:45:57.377616   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:45:59.876532   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:01.876764   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:05.364635   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:46:03.878819   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:06.377546   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:08.436674   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:46:08.877406   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:10.877528   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:14.516652   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:46:12.877952   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:15.377993   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:17.588657   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:46:17.875912   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:19.877080   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:23.668636   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:46:22.377093   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:24.878340   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:26.740703   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:46:27.377076   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:29.880578   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:32.820616   45696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.6:22: connect: no route to host
	I0919 17:46:32.378323   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:34.877137   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:37.380435   42957 pod_ready.go:102] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:46:39.857402   42957 pod_ready.go:81] duration metric: took 4m0.000143248s waiting for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	E0919 17:46:39.857441   42957 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:46:39.857470   42957 pod_ready.go:38] duration metric: took 8m0.083821212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:46:39.859655   42957 out.go:177] 
	W0919 17:46:39.861310   42957 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: extra waiting: timed out waiting 6m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0919 17:46:39.861333   42957 out.go:239] * 
	W0919 17:46:39.862218   42957 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 17:46:39.863771   42957 out.go:177] 
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:37:46 UTC, ends at Tue 2023-09-19 17:46:40 UTC. --
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.534788400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695145600534775519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:106105,},InodesUsed:&UInt64Value{Value:59,},},},}" file="go-grpc-middleware/chain.go:25" id=ef08be2c-8b6a-46c4-abc4-afe7c6a2f795 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.535686915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6b7a275a-3d1b-4885-b7ac-0f61b89de68f name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.535736681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6b7a275a-3d1b-4885-b7ac-0f61b89de68f name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.535967942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd3a2ff2dd02e68c9b36e67c2c8dbc23f784290a9e267a61863b3333bfe1841b,PodSandboxId:315fa8a919150756461960541a15bd0ee34b4f7bd648ce66f7b675f55d3f0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145121868019830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e5e0c9-0453-440b-aa5c-e6811f428297,},Annotations:map[string]string{io.kubernetes.container.hash: c35ed1a9,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5569a3d30ac5e669b8c8e01adb637c6964c5cc884711f295389e897409ee82b7,PodSandboxId:9fbd081ea40c0c09858c664ed414c5f2be3cb9735c48045c6137f23fd3829b15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1695145121375501629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7kqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79381ec1-45a7-4424-8383-f97b530979d3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ba77119,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0658611b1a00c3a12d3672173ded580b9dc0251c9b76468f95f5710ae53d6c1d,PodSandboxId:5d88ab4de24ffd6b21267bb2352bf505dd1247b93d50f22d4cb2e00d86791427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695145120570775835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-wqwp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8756ca49-2953-422d-a534-6d1fa5655fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 271b83d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6545ae30fe659fbc649a68019833b3b9678bfaa082261d4c173519d92cc5983b,PodSandboxId:253eecfb31e2f4bee63f0d50aec37451f8e4fab1f1ad242114c19ade5ae93b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695145120635632430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4mh4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 382ef590-a6ef-4402-8762-1649f060fbc4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 271b83d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd79e316d5ce66d282cf16f6ff2fd0e30a9c1588bc2025d86073f3ede3044a4,PodSandboxId:15fce98f19e25215d034ad81e017f798c0e49d20882abdd7a9faf115398e4d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1695145096348692277,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1a8584b7a2f3994535e7ec284367ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84c4f7f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b50e541945c0acfe76f2984549e3bab1be1c9b1bda344701ac6a759e922772,PodSandboxId:386f3a0175c04ffcd2289f5a9b1480b70355711ebd73ee4b0995ea5bcb3f01cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1695145094934102850,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfbf825cc4c1f6a4592aafb2ce7ca1ec620ce2fb0fb00cc80254a5c5208f3bf,PodSandboxId:757f7d97ffec6eec36c4ffc497210c15387eeb9c05ffb3ecbf7c7c949b3b5d9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1695145094708352030,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782cd602a9a0a32020d1c1e29d23b93a30d0d1d4d18686ac5fb908d92733b171,PodSandboxId:e0a8651c19fd4a234a597dcc2608b414f3b275411ae0e086c143b3ffd9b8d22d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1695145094536317875,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:map[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6b7a275a-3d1b-4885-b7ac-0f61b89de68f name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.577280394Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=471f6b35-5fa4-40b8-9781-82db10fa5159 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.577334387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=471f6b35-5fa4-40b8-9781-82db10fa5159 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.578450748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e4befdc2-09a5-420b-8a9d-9d9493b41e1c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.578946714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695145600578851668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:106105,},InodesUsed:&UInt64Value{Value:59,},},},}" file="go-grpc-middleware/chain.go:25" id=e4befdc2-09a5-420b-8a9d-9d9493b41e1c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.579846894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8888c7d5-bd1a-4c8c-a68d-e16eeb178dae name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.579976757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8888c7d5-bd1a-4c8c-a68d-e16eeb178dae name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.580141367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd3a2ff2dd02e68c9b36e67c2c8dbc23f784290a9e267a61863b3333bfe1841b,PodSandboxId:315fa8a919150756461960541a15bd0ee34b4f7bd648ce66f7b675f55d3f0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145121868019830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e5e0c9-0453-440b-aa5c-e6811f428297,},Annotations:map[string]string{io.kubernetes.container.hash: c35ed1a9,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5569a3d30ac5e669b8c8e01adb637c6964c5cc884711f295389e897409ee82b7,PodSandboxId:9fbd081ea40c0c09858c664ed414c5f2be3cb9735c48045c6137f23fd3829b15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1695145121375501629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7kqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79381ec1-45a7-4424-8383-f97b530979d3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ba77119,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0658611b1a00c3a12d3672173ded580b9dc0251c9b76468f95f5710ae53d6c1d,PodSandboxId:5d88ab4de24ffd6b21267bb2352bf505dd1247b93d50f22d4cb2e00d86791427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695145120570775835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-wqwp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8756ca49-2953-422d-a534-6d1fa5655fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 271b83d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6545ae30fe659fbc649a68019833b3b9678bfaa082261d4c173519d92cc5983b,PodSandboxId:253eecfb31e2f4bee63f0d50aec37451f8e4fab1f1ad242114c19ade5ae93b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695145120635632430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4mh4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 382ef590-a6ef-4402-8762-1649f060fbc4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 271b83d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd79e316d5ce66d282cf16f6ff2fd0e30a9c1588bc2025d86073f3ede3044a4,PodSandboxId:15fce98f19e25215d034ad81e017f798c0e49d20882abdd7a9faf115398e4d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1695145096348692277,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1a8584b7a2f3994535e7ec284367ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84c4f7f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b50e541945c0acfe76f2984549e3bab1be1c9b1bda344701ac6a759e922772,PodSandboxId:386f3a0175c04ffcd2289f5a9b1480b70355711ebd73ee4b0995ea5bcb3f01cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1695145094934102850,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfbf825cc4c1f6a4592aafb2ce7ca1ec620ce2fb0fb00cc80254a5c5208f3bf,PodSandboxId:757f7d97ffec6eec36c4ffc497210c15387eeb9c05ffb3ecbf7c7c949b3b5d9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1695145094708352030,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782cd602a9a0a32020d1c1e29d23b93a30d0d1d4d18686ac5fb908d92733b171,PodSandboxId:e0a8651c19fd4a234a597dcc2608b414f3b275411ae0e086c143b3ffd9b8d22d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1695145094536317875,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:map[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8888c7d5-bd1a-4c8c-a68d-e16eeb178dae name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.625122117Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2f4c0832-6bb1-4655-b9bb-7873a0f00333 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.625178635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2f4c0832-6bb1-4655-b9bb-7873a0f00333 name=/runtime.v1.RuntimeService/Version
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.626981486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ab53b6e7-85f3-49ae-bf57-c749c543f80a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.627325941Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695145600627313052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:106105,},InodesUsed:&UInt64Value{Value:59,},},},}" file="go-grpc-middleware/chain.go:25" id=ab53b6e7-85f3-49ae-bf57-c749c543f80a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.627972408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f9a7b17d-4733-46a7-b2de-580f82fa9579 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.628023355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f9a7b17d-4733-46a7-b2de-580f82fa9579 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.628194831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd3a2ff2dd02e68c9b36e67c2c8dbc23f784290a9e267a61863b3333bfe1841b,PodSandboxId:315fa8a919150756461960541a15bd0ee34b4f7bd648ce66f7b675f55d3f0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145121868019830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e5e0c9-0453-440b-aa5c-e6811f428297,},Annotations:map[string]string{io.kubernetes.container.hash: c35ed1a9,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5569a3d30ac5e669b8c8e01adb637c6964c5cc884711f295389e897409ee82b7,PodSandboxId:9fbd081ea40c0c09858c664ed414c5f2be3cb9735c48045c6137f23fd3829b15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1695145121375501629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7kqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79381ec1-45a7-4424-8383-f97b530979d3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ba77119,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0658611b1a00c3a12d3672173ded580b9dc0251c9b76468f95f5710ae53d6c1d,PodSandboxId:5d88ab4de24ffd6b21267bb2352bf505dd1247b93d50f22d4cb2e00d86791427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695145120570775835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-wqwp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8756ca49-2953-422d-a534-6d1fa5655fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 271b83d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6545ae30fe659fbc649a68019833b3b9678bfaa082261d4c173519d92cc5983b,PodSandboxId:253eecfb31e2f4bee63f0d50aec37451f8e4fab1f1ad242114c19ade5ae93b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695145120635632430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4mh4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 382ef590-a6ef-4402-8762-1649f060fbc4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 271b83d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd79e316d5ce66d282cf16f6ff2fd0e30a9c1588bc2025d86073f3ede3044a4,PodSandboxId:15fce98f19e25215d034ad81e017f798c0e49d20882abdd7a9faf115398e4d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1695145096348692277,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1a8584b7a2f3994535e7ec284367ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84c4f7f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b50e541945c0acfe76f2984549e3bab1be1c9b1bda344701ac6a759e922772,PodSandboxId:386f3a0175c04ffcd2289f5a9b1480b70355711ebd73ee4b0995ea5bcb3f01cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1695145094934102850,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfbf825cc4c1f6a4592aafb2ce7ca1ec620ce2fb0fb00cc80254a5c5208f3bf,PodSandboxId:757f7d97ffec6eec36c4ffc497210c15387eeb9c05ffb3ecbf7c7c949b3b5d9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1695145094708352030,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782cd602a9a0a32020d1c1e29d23b93a30d0d1d4d18686ac5fb908d92733b171,PodSandboxId:e0a8651c19fd4a234a597dcc2608b414f3b275411ae0e086c143b3ffd9b8d22d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1695145094536317875,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:map[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f9a7b17d-4733-46a7-b2de-580f82fa9579 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.667177547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=22c9fe91-78a4-4c5e-a705-1aa9d5af99bb name=/runtime.v1.RuntimeService/Version
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.667235430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=22c9fe91-78a4-4c5e-a705-1aa9d5af99bb name=/runtime.v1.RuntimeService/Version
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.669097896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fb8264e8-9394-4fad-b306-145e9b5f9571 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.669428646Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695145600669418735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:106105,},InodesUsed:&UInt64Value{Value:59,},},},}" file="go-grpc-middleware/chain.go:25" id=fb8264e8-9394-4fad-b306-145e9b5f9571 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.670147472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9e44360f-9c7c-4e83-97f3-a51602b83991 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.670193187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9e44360f-9c7c-4e83-97f3-a51602b83991 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 17:46:40 old-k8s-version-100627 crio[715]: time="2023-09-19 17:46:40.670351365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd3a2ff2dd02e68c9b36e67c2c8dbc23f784290a9e267a61863b3333bfe1841b,PodSandboxId:315fa8a919150756461960541a15bd0ee34b4f7bd648ce66f7b675f55d3f0969,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145121868019830,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00e5e0c9-0453-440b-aa5c-e6811f428297,},Annotations:map[string]string{io.kubernetes.container.hash: c35ed1a9,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5569a3d30ac5e669b8c8e01adb637c6964c5cc884711f295389e897409ee82b7,PodSandboxId:9fbd081ea40c0c09858c664ed414c5f2be3cb9735c48045c6137f23fd3829b15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1695145121375501629,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j7kqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79381ec1-45a7-4424-8383-f97b530979d3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ba77119,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0658611b1a00c3a12d3672173ded580b9dc0251c9b76468f95f5710ae53d6c1d,PodSandboxId:5d88ab4de24ffd6b21267bb2352bf505dd1247b93d50f22d4cb2e00d86791427,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695145120570775835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-wqwp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8756ca49-2953-422d-a534-6d1fa5655fbb,},Annotations:map[string]string{io.kubernetes.container.hash: 271b83d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6545ae30fe659fbc649a68019833b3b9678bfaa082261d4c173519d92cc5983b,PodSandboxId:253eecfb31e2f4bee63f0d50aec37451f8e4fab1f1ad242114c19ade5ae93b38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695145120635632430,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-4mh4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 382ef590-a6ef-4402-8762-1649f060fbc4,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 271b83d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccd79e316d5ce66d282cf16f6ff2fd0e30a9c1588bc2025d86073f3ede3044a4,PodSandboxId:15fce98f19e25215d034ad81e017f798c0e49d20882abdd7a9faf115398e4d9d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1695145096348692277,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1a8584b7a2f3994535e7ec284367ee,},Annotations:map[string]string{io.kubernetes.container.hash: 84c4f7f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b50e541945c0acfe76f2984549e3bab1be1c9b1bda344701ac6a759e922772,PodSandboxId:386f3a0175c04ffcd2289f5a9b1480b70355711ebd73ee4b0995ea5bcb3f01cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1695145094934102850,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dfbf825cc4c1f6a4592aafb2ce7ca1ec620ce2fb0fb00cc80254a5c5208f3bf,PodSandboxId:757f7d97ffec6eec36c4ffc497210c15387eeb9c05ffb3ecbf7c7c949b3b5d9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1695145094708352030,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782cd602a9a0a32020d1c1e29d23b93a30d0d1d4d18686ac5fb908d92733b171,PodSandboxId:e0a8651c19fd4a234a597dcc2608b414f3b275411ae0e086c143b3ffd9b8d22d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1695145094536317875,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:map[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9e44360f-9c7c-4e83-97f3-a51602b83991 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cd3a2ff2dd02e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 minutes ago       Running             storage-provisioner       0                   315fa8a919150       storage-provisioner
	5569a3d30ac5e       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   7 minutes ago       Running             kube-proxy                0                   9fbd081ea40c0       kube-proxy-j7kqn
	6545ae30fe659       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   8 minutes ago       Running             coredns                   0                   253eecfb31e2f       coredns-5644d7b6d9-4mh4f
	0658611b1a00c       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   8 minutes ago       Running             coredns                   0                   5d88ab4de24ff       coredns-5644d7b6d9-wqwp7
	ccd79e316d5ce       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   8 minutes ago       Running             etcd                      0                   15fce98f19e25       etcd-old-k8s-version-100627
	72b50e541945c       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   8 minutes ago       Running             kube-controller-manager   0                   386f3a0175c04       kube-controller-manager-old-k8s-version-100627
	1dfbf825cc4c1       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   8 minutes ago       Running             kube-scheduler            0                   757f7d97ffec6       kube-scheduler-old-k8s-version-100627
	782cd602a9a0a       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   8 minutes ago       Running             kube-apiserver            0                   e0a8651c19fd4       kube-apiserver-old-k8s-version-100627
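	The ListContainers request/response pairs in the CRI-O debug log above carry the same data summarized in this table. A minimal Go sketch that issues the same RPC directly against the runtime socket, assuming CRI-O's default socket path (/var/run/crio/crio.sock, matching the cri-socket annotation further down) and the k8s.io/cri-api v1 client; the field names used here are the ones visible in the ListContainersResponse dumps above:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path, as reported in the node's cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter is why the log reports
		// "No filters were applied, returning full container list".
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Print a rough equivalent of the "container status" table rows.
			fmt.Printf("%.13s  %-25s  %-18s  created %s\n",
				c.Id, c.Metadata.Name, c.State,
				time.Unix(0, c.CreatedAt).Format(time.RFC3339))
		}
	}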
	
	* 
	* ==> coredns [0658611b1a00c3a12d3672173ded580b9dc0251c9b76468f95f5710ae53d6c1d] <==
	* 2023-09-19T17:42:33.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:42:43.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:42:53.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:03.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:13.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:23.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:33.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:43.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:53.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:03.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:13.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:23.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:33.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:43.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:53.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:03.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:13.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:23.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:33.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:43.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:53.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:46:03.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:46:13.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:46:23.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:46:33.204Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> coredns [6545ae30fe659fbc649a68019833b3b9678bfaa082261d4c173519d92cc5983b] <==
	* 2023-09-19T17:42:36.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:42:46.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:42:56.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:06.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:16.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:26.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:36.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:46.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:43:56.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:06.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:16.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:26.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:36.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:46.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:44:56.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:06.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:16.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:26.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:36.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:46.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:45:56.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:46:06.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:46:16.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:46:26.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:46:36.530Z [INFO] plugin/ready: Still waiting on: "kubernetes"
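	The repeated "plugin/ready: Still waiting on: \"kubernetes\"" lines from both coredns containers mean the ready plugin keeps reporting not-ready because the kubernetes plugin has not yet signalled that its API watches are synced. A minimal Go sketch for probing that readiness endpoint, assuming the ready plugin's default port 8181 and a reachable CoreDNS pod address (the IP below is a hypothetical placeholder):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Substitute the actual CoreDNS pod IP; 10.244.0.2 is only illustrative.
		resp, err := http.Get("http://10.244.0.2:8181/ready")
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// Expect 200 OK once every enabled plugin reports ready, and a non-200
		// status while the "Still waiting on" messages are still being logged.
		fmt.Println(resp.Status, string(body))
	}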
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-100627
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-100627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=old-k8s-version-100627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_38_24_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:38:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:46:30 +0000   Tue, 19 Sep 2023 17:38:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:46:30 +0000   Tue, 19 Sep 2023 17:38:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:46:30 +0000   Tue, 19 Sep 2023 17:38:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:46:30 +0000   Tue, 19 Sep 2023 17:38:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.182
	  Hostname:    old-k8s-version-100627
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 60319d8735584b7ab7f14de7c68a7260
	 System UUID:                60319d87-3558-4b7a-b7f1-4de7c68a7260
	 Boot ID:                    bee3bafc-3dc6-4b62-9560-f22cc2d7bbea
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-4mh4f                          100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     8m1s
	  kube-system                coredns-5644d7b6d9-wqwp7                          100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     8m1s
	  kube-system                etcd-old-k8s-version-100627                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         7m1s
	  kube-system                kube-apiserver-old-k8s-version-100627             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m56s
	  kube-system                kube-controller-manager-old-k8s-version-100627    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         7m14s
	  kube-system                kube-proxy-j7kqn                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         8m1s
	  kube-system                kube-scheduler-old-k8s-version-100627             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m54s
	  kube-system                storage-provisioner                               0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         8m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
	  memory             140Mi (6%!)(MISSING)  340Mi (16%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  8m27s (x8 over 8m27s)  kubelet, old-k8s-version-100627     Node old-k8s-version-100627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s (x8 over 8m27s)  kubelet, old-k8s-version-100627     Node old-k8s-version-100627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s (x7 over 8m27s)  kubelet, old-k8s-version-100627     Node old-k8s-version-100627 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m59s                  kube-proxy, old-k8s-version-100627  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep19 17:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.078577] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.542832] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.473933] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.149977] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.113853] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.810829] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.111629] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.148622] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.104170] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.231359] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Sep19 17:38] systemd-fstab-generator[1040]: Ignoring "noauto" for root device
	[  +2.377261] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep19 17:39] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [ccd79e316d5ce66d282cf16f6ff2fd0e30a9c1588bc2025d86073f3ede3044a4] <==
	* 2023-09-19 17:38:16.463365 I | etcdserver: ff4c26660998c2c8 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-19 17:38:16.463804 I | etcdserver/membership: added member ff4c26660998c2c8 [https://192.168.72.182:2380] to cluster 1c15affd5c0f3dba
	2023-09-19 17:38:16.465161 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-19 17:38:16.465335 I | embed: listening for metrics on http://192.168.72.182:2381
	2023-09-19 17:38:16.465435 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-19 17:38:16.550150 I | raft: ff4c26660998c2c8 is starting a new election at term 1
	2023-09-19 17:38:16.550253 I | raft: ff4c26660998c2c8 became candidate at term 2
	2023-09-19 17:38:16.550290 I | raft: ff4c26660998c2c8 received MsgVoteResp from ff4c26660998c2c8 at term 2
	2023-09-19 17:38:16.550320 I | raft: ff4c26660998c2c8 became leader at term 2
	2023-09-19 17:38:16.550344 I | raft: raft.node: ff4c26660998c2c8 elected leader ff4c26660998c2c8 at term 2
	2023-09-19 17:38:16.550670 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-19 17:38:16.552627 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-19 17:38:16.552797 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-19 17:38:16.553137 I | etcdserver: published {Name:old-k8s-version-100627 ClientURLs:[https://192.168.72.182:2379]} to cluster 1c15affd5c0f3dba
	2023-09-19 17:38:16.553250 I | embed: ready to serve client requests
	2023-09-19 17:38:16.556326 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-19 17:38:16.556626 I | embed: ready to serve client requests
	2023-09-19 17:38:16.560592 I | embed: serving client requests on 192.168.72.182:2379
	2023-09-19 17:38:30.156692 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (240.921568ms) to execute
	2023-09-19 17:38:38.176075 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (242.448456ms) to execute
	2023-09-19 17:38:38.176239 W | etcdserver: request "header:<ID:14035620720868131846 > lease_revoke:<id:42c88aae857502ee>" with result "size:28" took too long (207.665995ms) to execute
	2023-09-19 17:38:38.176474 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" " with result "range_response_count:0 size:5" took too long (176.188393ms) to execute
	2023-09-19 17:38:38.884670 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:263" took too long (134.704768ms) to execute
	2023-09-19 17:38:56.796009 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (228.124951ms) to execute
	2023-09-19 17:40:38.862535 W | etcdserver: read-only range request "key:\"/registry/storageclasses\" range_end:\"/registry/storageclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (104.260758ms) to execute
	
	* 
	* ==> kernel <==
	*  17:46:41 up 9 min,  0 users,  load average: 0.23, 0.32, 0.19
	Linux old-k8s-version-100627 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [782cd602a9a0a32020d1c1e29d23b93a30d0d1d4d18686ac5fb908d92733b171] <==
	* I0919 17:38:19.621956       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0919 17:38:19.622047       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0919 17:38:19.622051       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
	I0919 17:38:19.689811       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 17:38:19.690019       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 17:38:19.690047       1 cache.go:39] Caches are synced for autoregister controller
	E0919 17:38:19.698125       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.72.182, ResourceVersion: 0, AdditionalErrorMsg: 
	I0919 17:38:19.723952       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0919 17:38:20.583216       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0919 17:38:20.583279       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0919 17:38:20.583295       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0919 17:38:20.615769       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0919 17:38:20.658919       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0919 17:38:20.658962       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0919 17:38:22.367159       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 17:38:22.647294       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0919 17:38:22.924945       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.72.182]
	I0919 17:38:22.925625       1 controller.go:606] quota admission added evaluator for: endpoints
	I0919 17:38:22.996358       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 17:38:23.903731       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0919 17:38:24.335373       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0919 17:38:24.666554       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0919 17:38:39.091965       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0919 17:38:39.155559       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0919 17:38:39.418724       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [72b50e541945c0acfe76f2984549e3bab1be1c9b1bda344701ac6a759e922772] <==
	* I0919 17:38:39.175439       1 range_allocator.go:359] Set node old-k8s-version-100627 PodCIDR to [10.244.0.0/24]
	E0919 17:38:39.183679       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0919 17:38:39.192328       1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0919 17:38:39.211033       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"e34d60ec-a40b-49ad-8c5d-48743f561cc4", ResourceVersion:"214", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63830741904, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0010e9860), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00118a940), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0010e9880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0010e98a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0010e98e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000adf8b0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001965768), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001186f60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000b2a48)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0019657a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0919 17:38:39.223158       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0919 17:38:39.226674       1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0919 17:38:39.232108       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"e34d60ec-a40b-49ad-8c5d-48743f561cc4", ResourceVersion:"302", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63830741904, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0008eb5e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001234300), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0008eb640), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0008eb6a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0008eb740)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000c2d450), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ff0868), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f3d1a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0013ffc68)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000ff08d8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0919 17:38:39.391440       1 shared_informer.go:204] Caches are synced for certificate 
	I0919 17:38:39.393914       1 shared_informer.go:204] Caches are synced for certificate 
	I0919 17:38:39.413384       1 shared_informer.go:204] Caches are synced for deployment 
	I0919 17:38:39.424230       1 log.go:172] [INFO] signed certificate with serial number 562132878288470005005945764890910827329382968291
	I0919 17:38:39.442397       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"82b5bd0f-d42e-4be7-9b41-8c3c06d06980", APIVersion:"apps/v1", ResourceVersion:"201", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
	I0919 17:38:39.501447       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I0919 17:38:39.504142       1 shared_informer.go:204] Caches are synced for disruption 
	I0919 17:38:39.504219       1 disruption.go:341] Sending events to api server.
	I0919 17:38:39.522146       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"a3b69ca5-82b2-424f-8386-c62c0930371e", APIVersion:"apps/v1", ResourceVersion:"324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-wqwp7
	I0919 17:38:39.565328       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"a3b69ca5-82b2-424f-8386-c62c0930371e", APIVersion:"apps/v1", ResourceVersion:"324", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-4mh4f
	I0919 17:38:39.600660       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0919 17:38:39.600723       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0919 17:38:39.604715       1 shared_informer.go:204] Caches are synced for endpoint 
	I0919 17:38:39.607121       1 shared_informer.go:204] Caches are synced for resource quota 
	I0919 17:38:39.642559       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0919 17:38:39.642739       1 shared_informer.go:204] Caches are synced for resource quota 
	
	* 
	* ==> kube-proxy [5569a3d30ac5e669b8c8e01adb637c6964c5cc884711f295389e897409ee82b7] <==
	* W0919 17:38:41.705471       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0919 17:38:41.713542       1 node.go:135] Successfully retrieved node IP: 192.168.72.182
	I0919 17:38:41.713612       1 server_others.go:149] Using iptables Proxier.
	I0919 17:38:41.713981       1 server.go:529] Version: v1.16.0
	I0919 17:38:41.718100       1 config.go:131] Starting endpoints config controller
	I0919 17:38:41.718142       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0919 17:38:41.721155       1 config.go:313] Starting service config controller
	I0919 17:38:41.721202       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0919 17:38:41.821472       1 shared_informer.go:204] Caches are synced for service config 
	I0919 17:38:41.826316       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [1dfbf825cc4c1f6a4592aafb2ce7ca1ec620ce2fb0fb00cc80254a5c5208f3bf] <==
	* W0919 17:38:19.699076       1 authentication.go:79] Authentication is disabled
	I0919 17:38:19.699216       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0919 17:38:19.706919       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0919 17:38:19.752325       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 17:38:19.752463       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:38:19.752528       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 17:38:19.752573       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 17:38:19.752613       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 17:38:19.752848       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 17:38:19.753021       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 17:38:19.753146       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 17:38:19.753208       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 17:38:19.753620       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 17:38:19.763973       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 17:38:20.756203       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:38:20.756482       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 17:38:20.758258       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 17:38:20.759434       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 17:38:20.759511       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 17:38:20.761224       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 17:38:20.763344       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 17:38:20.763410       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 17:38:20.764428       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 17:38:20.765352       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 17:38:20.766474       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:37:46 UTC, ends at Tue 2023-09-19 17:46:41 UTC. --
	Sep 19 17:38:20 old-k8s-version-100627 kubelet[1071]: E0919 17:38:20.140668    1071 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"old-k8s-version-100627.17865d7b2ab9998f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-100627", UID:"old-k8s-version-100627", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node old-k8s-version-100627 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:
"kubelet", Host:"old-k8s-version-100627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a950153b6a78f, ext:207354962, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a95015778bdc4, ext:270406268, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	Sep 19 17:38:20 old-k8s-version-100627 kubelet[1071]: E0919 17:38:20.196412    1071 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"old-k8s-version-100627.17865d7b2ab9a6b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-100627", UID:"old-k8s-version-100627", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node old-k8s-version-100627 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"k
ubelet", Host:"old-k8s-version-100627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a950153b6b4b9, ext:207358320, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a95015778c8d8, ext:270409104, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	Sep 19 17:38:20 old-k8s-version-100627 kubelet[1071]: E0919 17:38:20.260636    1071 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"old-k8s-version-100627.17865d7b2ab98230", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-100627", UID:"old-k8s-version-100627", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node old-k8s-version-100627 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Compon
ent:"kubelet", Host:"old-k8s-version-100627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a950153b69030, ext:207348979, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a9501638b7abe, ext:472960897, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	Sep 19 17:38:20 old-k8s-version-100627 kubelet[1071]: E0919 17:38:20.663848    1071 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"old-k8s-version-100627.17865d7b2ab9998f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-100627", UID:"old-k8s-version-100627", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node old-k8s-version-100627 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:
"kubelet", Host:"old-k8s-version-100627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a950153b6a78f, ext:207354962, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a9501638b9a92, ext:472969044, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	Sep 19 17:38:21 old-k8s-version-100627 kubelet[1071]: E0919 17:38:21.061383    1071 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"old-k8s-version-100627.17865d7b2ab9a6b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-100627", UID:"old-k8s-version-100627", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node old-k8s-version-100627 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"k
ubelet", Host:"old-k8s-version-100627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a950153b6b4b9, ext:207358320, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a9501638baa0d, ext:472973005, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	Sep 19 17:38:21 old-k8s-version-100627 kubelet[1071]: E0919 17:38:21.460799    1071 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"old-k8s-version-100627.17865d7b2ab9a6b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-100627", UID:"old-k8s-version-100627", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node old-k8s-version-100627 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"k
ubelet", Host:"old-k8s-version-100627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a950153b6b4b9, ext:207358320, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a9501693f8d0a, ext:568648133, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	Sep 19 17:38:21 old-k8s-version-100627 kubelet[1071]: E0919 17:38:21.862600    1071 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"old-k8s-version-100627.17865d7b2ab98230", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-100627", UID:"old-k8s-version-100627", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node old-k8s-version-100627 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Compon
ent:"kubelet", Host:"old-k8s-version-100627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a950153b69030, ext:207348979, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a9501693f61d8, ext:568637074, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	Sep 19 17:38:22 old-k8s-version-100627 kubelet[1071]: E0919 17:38:22.261128    1071 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"old-k8s-version-100627.17865d7b2ab9998f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-100627", UID:"old-k8s-version-100627", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node old-k8s-version-100627 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:
"kubelet", Host:"old-k8s-version-100627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a950153b6a78f, ext:207354962, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a9501693f7d22, ext:568644070, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	Sep 19 17:38:22 old-k8s-version-100627 kubelet[1071]: E0919 17:38:22.661061    1071 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"old-k8s-version-100627.17865d7b2ab9a6b9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-100627", UID:"old-k8s-version-100627", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node old-k8s-version-100627 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"k
ubelet", Host:"old-k8s-version-100627"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a950153b6b4b9, ext:207358320, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a950169ac729d, ext:575784791, loc:(*time.Location)(0x797f100)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
	Sep 19 17:38:23 old-k8s-version-100627 kubelet[1071]: E0919 17:38:23.354035    1071 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.180818    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/79381ec1-45a7-4424-8383-f97b530979d3-lib-modules") pod "kube-proxy-j7kqn" (UID: "79381ec1-45a7-4424-8383-f97b530979d3")
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.180945    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/79381ec1-45a7-4424-8383-f97b530979d3-kube-proxy") pod "kube-proxy-j7kqn" (UID: "79381ec1-45a7-4424-8383-f97b530979d3")
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.181009    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-kpbrd" (UniqueName: "kubernetes.io/secret/79381ec1-45a7-4424-8383-f97b530979d3-kube-proxy-token-kpbrd") pod "kube-proxy-j7kqn" (UID: "79381ec1-45a7-4424-8383-f97b530979d3")
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.181049    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/79381ec1-45a7-4424-8383-f97b530979d3-xtables-lock") pod "kube-proxy-j7kqn" (UID: "79381ec1-45a7-4424-8383-f97b530979d3")
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.271329    1071 kuberuntime_manager.go:961] updating runtime config through cri with podcidr 10.244.0.0/24
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.272390    1071 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: W0919 17:38:39.498609    1071 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod79381ec1-45a7-4424-8383-f97b530979d3/crio-conmon-9fbd081ea40c0c09858c664ed414c5f2be3cb9735c48045c6137f23fd3829b15": none of the resources are being tracked.
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.582138    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8756ca49-2953-422d-a534-6d1fa5655fbb-config-volume") pod "coredns-5644d7b6d9-wqwp7" (UID: "8756ca49-2953-422d-a534-6d1fa5655fbb")
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.582172    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8dhqs" (UniqueName: "kubernetes.io/secret/8756ca49-2953-422d-a534-6d1fa5655fbb-coredns-token-8dhqs") pod "coredns-5644d7b6d9-wqwp7" (UID: "8756ca49-2953-422d-a534-6d1fa5655fbb")
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.682509    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/382ef590-a6ef-4402-8762-1649f060fbc4-config-volume") pod "coredns-5644d7b6d9-4mh4f" (UID: "382ef590-a6ef-4402-8762-1649f060fbc4")
	Sep 19 17:38:39 old-k8s-version-100627 kubelet[1071]: I0919 17:38:39.682586    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8dhqs" (UniqueName: "kubernetes.io/secret/382ef590-a6ef-4402-8762-1649f060fbc4-coredns-token-8dhqs") pod "coredns-5644d7b6d9-4mh4f" (UID: "382ef590-a6ef-4402-8762-1649f060fbc4")
	Sep 19 17:38:41 old-k8s-version-100627 kubelet[1071]: I0919 17:38:41.087133    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/00e5e0c9-0453-440b-aa5c-e6811f428297-tmp") pod "storage-provisioner" (UID: "00e5e0c9-0453-440b-aa5c-e6811f428297")
	Sep 19 17:38:41 old-k8s-version-100627 kubelet[1071]: I0919 17:38:41.087243    1071 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-8wj7g" (UniqueName: "kubernetes.io/secret/00e5e0c9-0453-440b-aa5c-e6811f428297-storage-provisioner-token-8wj7g") pod "storage-provisioner" (UID: "00e5e0c9-0453-440b-aa5c-e6811f428297")
	Sep 19 17:38:43 old-k8s-version-100627 kubelet[1071]: I0919 17:38:43.222180    1071 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
	Sep 19 17:43:13 old-k8s-version-100627 kubelet[1071]: E0919 17:43:13.353055    1071 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-100627 -n old-k8s-version-100627
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-100627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (584.44s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (140.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-415155 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-415155 --alsologtostderr -v=3: exit status 82 (2m1.770994946s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-415155"  ...
	* Stopping node "embed-certs-415155"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:39:37.905783   44621 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:39:37.906105   44621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:39:37.906116   44621 out.go:309] Setting ErrFile to fd 2...
	I0919 17:39:37.906121   44621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:39:37.906420   44621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:39:37.906693   44621 out.go:303] Setting JSON to false
	I0919 17:39:37.906765   44621 mustload.go:65] Loading cluster: embed-certs-415155
	I0919 17:39:37.907108   44621 config.go:182] Loaded profile config "embed-certs-415155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:39:37.907175   44621 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/embed-certs-415155/config.json ...
	I0919 17:39:37.907312   44621 mustload.go:65] Loading cluster: embed-certs-415155
	I0919 17:39:37.907406   44621 config.go:182] Loaded profile config "embed-certs-415155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:39:37.907428   44621 stop.go:39] StopHost: embed-certs-415155
	I0919 17:39:37.907906   44621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:39:37.907978   44621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:39:37.925082   44621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0919 17:39:37.925569   44621 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:39:37.927000   44621 main.go:141] libmachine: Using API Version  1
	I0919 17:39:37.927033   44621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:39:37.927359   44621 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:39:37.929789   44621 out.go:177] * Stopping node "embed-certs-415155"  ...
	I0919 17:39:37.931303   44621 main.go:141] libmachine: Stopping "embed-certs-415155"...
	I0919 17:39:37.931320   44621 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:39:37.932949   44621 main.go:141] libmachine: (embed-certs-415155) Calling .Stop
	I0919 17:39:37.936993   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 0/60
	I0919 17:39:38.938870   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 1/60
	I0919 17:39:39.940244   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 2/60
	I0919 17:39:40.941939   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 3/60
	I0919 17:39:41.943308   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 4/60
	I0919 17:39:42.945479   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 5/60
	I0919 17:39:43.947295   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 6/60
	I0919 17:39:44.949328   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 7/60
	I0919 17:39:45.951131   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 8/60
	I0919 17:39:46.953291   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 9/60
	I0919 17:39:47.954836   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 10/60
	I0919 17:39:48.956206   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 11/60
	I0919 17:39:49.957583   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 12/60
	I0919 17:39:50.959104   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 13/60
	I0919 17:39:51.960569   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 14/60
	I0919 17:39:52.962438   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 15/60
	I0919 17:39:53.963885   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 16/60
	I0919 17:39:54.965297   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 17/60
	I0919 17:39:55.967011   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 18/60
	I0919 17:39:56.968241   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 19/60
	I0919 17:39:57.970012   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 20/60
	I0919 17:39:58.972306   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 21/60
	I0919 17:39:59.974302   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 22/60
	I0919 17:40:00.975834   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 23/60
	I0919 17:40:01.977329   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 24/60
	I0919 17:40:03.048804   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 25/60
	I0919 17:40:04.051046   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 26/60
	I0919 17:40:05.052631   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 27/60
	I0919 17:40:06.055059   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 28/60
	I0919 17:40:07.056520   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 29/60
	I0919 17:40:08.058621   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 30/60
	I0919 17:40:09.060196   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 31/60
	I0919 17:40:10.061813   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 32/60
	I0919 17:40:11.063932   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 33/60
	I0919 17:40:12.065687   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 34/60
	I0919 17:40:13.067812   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 35/60
	I0919 17:40:14.069259   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 36/60
	I0919 17:40:15.070814   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 37/60
	I0919 17:40:16.072182   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 38/60
	I0919 17:40:17.073602   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 39/60
	I0919 17:40:18.075676   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 40/60
	I0919 17:40:19.076988   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 41/60
	I0919 17:40:20.078414   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 42/60
	I0919 17:40:21.079812   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 43/60
	I0919 17:40:22.081149   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 44/60
	I0919 17:40:23.082876   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 45/60
	I0919 17:40:24.084212   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 46/60
	I0919 17:40:25.085459   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 47/60
	I0919 17:40:26.087072   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 48/60
	I0919 17:40:27.088320   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 49/60
	I0919 17:40:28.090374   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 50/60
	I0919 17:40:29.091627   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 51/60
	I0919 17:40:30.092981   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 52/60
	I0919 17:40:31.094688   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 53/60
	I0919 17:40:32.095948   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 54/60
	I0919 17:40:33.097685   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 55/60
	I0919 17:40:34.099295   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 56/60
	I0919 17:40:35.101541   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 57/60
	I0919 17:40:36.102937   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 58/60
	I0919 17:40:37.104635   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 59/60
	I0919 17:40:38.105766   44621 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0919 17:40:38.105816   44621 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:40:38.105831   44621 retry.go:31] will retry after 1.404332437s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:40:39.511351   44621 stop.go:39] StopHost: embed-certs-415155
	I0919 17:40:39.511754   44621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:40:39.511806   44621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:40:39.526205   44621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I0919 17:40:39.526662   44621 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:40:39.527132   44621 main.go:141] libmachine: Using API Version  1
	I0919 17:40:39.527153   44621 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:40:39.527505   44621 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:40:39.529738   44621 out.go:177] * Stopping node "embed-certs-415155"  ...
	I0919 17:40:39.531198   44621 main.go:141] libmachine: Stopping "embed-certs-415155"...
	I0919 17:40:39.531217   44621 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:40:39.532870   44621 main.go:141] libmachine: (embed-certs-415155) Calling .Stop
	I0919 17:40:39.536009   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 0/60
	I0919 17:40:40.537444   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 1/60
	I0919 17:40:41.538784   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 2/60
	I0919 17:40:42.540045   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 3/60
	I0919 17:40:43.541636   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 4/60
	I0919 17:40:44.543353   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 5/60
	I0919 17:40:45.545824   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 6/60
	I0919 17:40:46.547278   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 7/60
	I0919 17:40:47.548740   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 8/60
	I0919 17:40:48.550740   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 9/60
	I0919 17:40:49.552582   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 10/60
	I0919 17:40:50.554996   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 11/60
	I0919 17:40:51.556199   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 12/60
	I0919 17:40:52.557393   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 13/60
	I0919 17:40:53.558841   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 14/60
	I0919 17:40:54.560683   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 15/60
	I0919 17:40:55.561972   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 16/60
	I0919 17:40:56.563464   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 17/60
	I0919 17:40:57.565008   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 18/60
	I0919 17:40:58.566481   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 19/60
	I0919 17:40:59.568742   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 20/60
	I0919 17:41:00.570048   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 21/60
	I0919 17:41:01.571470   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 22/60
	I0919 17:41:02.572787   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 23/60
	I0919 17:41:03.574226   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 24/60
	I0919 17:41:04.576208   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 25/60
	I0919 17:41:05.577489   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 26/60
	I0919 17:41:06.578978   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 27/60
	I0919 17:41:07.580467   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 28/60
	I0919 17:41:08.581769   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 29/60
	I0919 17:41:09.583640   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 30/60
	I0919 17:41:10.585356   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 31/60
	I0919 17:41:11.587609   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 32/60
	I0919 17:41:12.588929   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 33/60
	I0919 17:41:13.590974   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 34/60
	I0919 17:41:14.592860   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 35/60
	I0919 17:41:15.594307   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 36/60
	I0919 17:41:16.595605   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 37/60
	I0919 17:41:17.598076   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 38/60
	I0919 17:41:18.599432   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 39/60
	I0919 17:41:19.601154   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 40/60
	I0919 17:41:20.602895   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 41/60
	I0919 17:41:21.604507   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 42/60
	I0919 17:41:22.605724   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 43/60
	I0919 17:41:23.606992   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 44/60
	I0919 17:41:24.608593   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 45/60
	I0919 17:41:25.609929   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 46/60
	I0919 17:41:26.611266   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 47/60
	I0919 17:41:27.612625   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 48/60
	I0919 17:41:28.613994   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 49/60
	I0919 17:41:29.615901   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 50/60
	I0919 17:41:30.617162   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 51/60
	I0919 17:41:31.618367   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 52/60
	I0919 17:41:32.619725   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 53/60
	I0919 17:41:33.621032   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 54/60
	I0919 17:41:34.622220   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 55/60
	I0919 17:41:35.623569   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 56/60
	I0919 17:41:36.625025   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 57/60
	I0919 17:41:37.626744   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 58/60
	I0919 17:41:38.628032   44621 main.go:141] libmachine: (embed-certs-415155) Waiting for machine to stop 59/60
	I0919 17:41:39.628981   44621 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0919 17:41:39.629031   44621 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:41:39.631140   44621 out.go:177] 
	W0919 17:41:39.632665   44621 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0919 17:41:39.632677   44621 out.go:239] * 
	* 
	W0919 17:41:39.634853   44621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 17:41:39.636265   44621 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-415155 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-415155 -n embed-certs-415155
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-415155 -n embed-certs-415155: exit status 3 (18.430423813s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:41:58.068694   45517 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.6:22: connect: no route to host
	E0919 17:41:58.068716   45517 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.6:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-415155" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.20s)
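
The two-minute failure above is the visible effect of the stop path timing out: the driver is asked to stop the VM, its state is polled once per second for 60 attempts, the sequence is retried once, and the command finally exits with GUEST_STOP_TIMEOUT (exit status 82) because the machine still reports "Running". A minimal sketch of that polling pattern, using hypothetical names rather than the actual minikube/libmachine source, is:

	// Sketch only: hypothetical stand-ins for the kvm2 driver calls seen in the log.
	package stopwait

	import (
		"fmt"
		"time"
	)

	// Driver abstracts the two calls visible in the log: .Stop and .GetState.
	type Driver interface {
		Stop() error
		GetState() (string, error)
	}

	// stopHost asks the driver to stop the VM and then polls its state once per
	// second for up to `attempts` tries, mirroring the "Waiting for machine to
	// stop i/60" lines above. If the VM never reaches "Stopped", the error text
	// matches the "unable to stop vm, current state ..." failure in the log.
	func stopHost(d Driver, attempts int) error {
		if err := d.Stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			if s, err := d.GetState(); err == nil && s == "Stopped" {
				return nil
			}
			time.Sleep(time.Second)
		}
		s, _ := d.GetState()
		return fmt.Errorf("unable to stop vm, current state %q", s)
	}

With 60 one-second attempts per pass and one retry pass, the stop budget is roughly two minutes, which lines up with the 2m1s command durations and ~140s test times reported for all three Stop failures in this run.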

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (140.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-215748 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-215748 --alsologtostderr -v=3: exit status 82 (2m1.414689675s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-215748"  ...
	* Stopping node "no-preload-215748"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:40:11.328111   45083 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:40:11.328214   45083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:40:11.328224   45083 out.go:309] Setting ErrFile to fd 2...
	I0919 17:40:11.328228   45083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:40:11.328461   45083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:40:11.328733   45083 out.go:303] Setting JSON to false
	I0919 17:40:11.328822   45083 mustload.go:65] Loading cluster: no-preload-215748
	I0919 17:40:11.329164   45083 config.go:182] Loaded profile config "no-preload-215748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:40:11.329238   45083 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/config.json ...
	I0919 17:40:11.329405   45083 mustload.go:65] Loading cluster: no-preload-215748
	I0919 17:40:11.329539   45083 config.go:182] Loaded profile config "no-preload-215748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:40:11.329579   45083 stop.go:39] StopHost: no-preload-215748
	I0919 17:40:11.330000   45083 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:40:11.330062   45083 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:40:11.345600   45083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I0919 17:40:11.346017   45083 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:40:11.346604   45083 main.go:141] libmachine: Using API Version  1
	I0919 17:40:11.346627   45083 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:40:11.346959   45083 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:40:11.350039   45083 out.go:177] * Stopping node "no-preload-215748"  ...
	I0919 17:40:11.351370   45083 main.go:141] libmachine: Stopping "no-preload-215748"...
	I0919 17:40:11.351386   45083 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:40:11.353107   45083 main.go:141] libmachine: (no-preload-215748) Calling .Stop
	I0919 17:40:11.356797   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 0/60
	I0919 17:40:12.359115   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 1/60
	I0919 17:40:13.360798   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 2/60
	I0919 17:40:14.362103   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 3/60
	I0919 17:40:15.363370   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 4/60
	I0919 17:40:16.365370   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 5/60
	I0919 17:40:17.366978   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 6/60
	I0919 17:40:18.368225   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 7/60
	I0919 17:40:19.369510   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 8/60
	I0919 17:40:20.370801   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 9/60
	I0919 17:40:21.372832   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 10/60
	I0919 17:40:22.374571   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 11/60
	I0919 17:40:23.375938   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 12/60
	I0919 17:40:24.377183   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 13/60
	I0919 17:40:25.378413   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 14/60
	I0919 17:40:26.380246   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 15/60
	I0919 17:40:27.381583   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 16/60
	I0919 17:40:28.382967   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 17/60
	I0919 17:40:29.384235   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 18/60
	I0919 17:40:30.385703   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 19/60
	I0919 17:40:31.387750   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 20/60
	I0919 17:40:32.389118   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 21/60
	I0919 17:40:33.391184   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 22/60
	I0919 17:40:34.392618   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 23/60
	I0919 17:40:35.395263   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 24/60
	I0919 17:40:36.397412   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 25/60
	I0919 17:40:37.399125   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 26/60
	I0919 17:40:38.401614   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 27/60
	I0919 17:40:39.402898   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 28/60
	I0919 17:40:40.404843   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 29/60
	I0919 17:40:41.406868   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 30/60
	I0919 17:40:42.408137   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 31/60
	I0919 17:40:43.409383   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 32/60
	I0919 17:40:44.411206   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 33/60
	I0919 17:40:45.412631   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 34/60
	I0919 17:40:46.414376   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 35/60
	I0919 17:40:47.415868   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 36/60
	I0919 17:40:48.417345   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 37/60
	I0919 17:40:49.418726   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 38/60
	I0919 17:40:50.420217   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 39/60
	I0919 17:40:51.422064   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 40/60
	I0919 17:40:52.423965   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 41/60
	I0919 17:40:53.425565   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 42/60
	I0919 17:40:54.426969   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 43/60
	I0919 17:40:55.428325   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 44/60
	I0919 17:40:56.430349   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 45/60
	I0919 17:40:57.431517   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 46/60
	I0919 17:40:58.432837   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 47/60
	I0919 17:40:59.435012   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 48/60
	I0919 17:41:00.436321   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 49/60
	I0919 17:41:01.438751   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 50/60
	I0919 17:41:02.440618   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 51/60
	I0919 17:41:03.442115   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 52/60
	I0919 17:41:04.443498   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 53/60
	I0919 17:41:05.444766   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 54/60
	I0919 17:41:06.446704   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 55/60
	I0919 17:41:07.447858   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 56/60
	I0919 17:41:08.449322   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 57/60
	I0919 17:41:09.450878   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 58/60
	I0919 17:41:10.453291   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 59/60
	I0919 17:41:11.454719   45083 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0919 17:41:11.454759   45083 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:41:11.454775   45083 retry.go:31] will retry after 1.125302678s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:41:12.581023   45083 stop.go:39] StopHost: no-preload-215748
	I0919 17:41:12.581401   45083 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:41:12.581453   45083 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:41:12.595582   45083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
	I0919 17:41:12.595979   45083 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:41:12.596467   45083 main.go:141] libmachine: Using API Version  1
	I0919 17:41:12.596504   45083 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:41:12.596857   45083 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:41:12.600180   45083 out.go:177] * Stopping node "no-preload-215748"  ...
	I0919 17:41:12.601593   45083 main.go:141] libmachine: Stopping "no-preload-215748"...
	I0919 17:41:12.601611   45083 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:41:12.603310   45083 main.go:141] libmachine: (no-preload-215748) Calling .Stop
	I0919 17:41:12.606731   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 0/60
	I0919 17:41:13.608726   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 1/60
	I0919 17:41:14.610647   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 2/60
	I0919 17:41:15.612793   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 3/60
	I0919 17:41:16.614836   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 4/60
	I0919 17:41:17.616297   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 5/60
	I0919 17:41:18.618335   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 6/60
	I0919 17:41:19.619681   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 7/60
	I0919 17:41:20.620992   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 8/60
	I0919 17:41:21.622466   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 9/60
	I0919 17:41:22.624183   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 10/60
	I0919 17:41:23.625453   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 11/60
	I0919 17:41:24.626811   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 12/60
	I0919 17:41:25.627919   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 13/60
	I0919 17:41:26.629101   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 14/60
	I0919 17:41:27.630554   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 15/60
	I0919 17:41:28.631728   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 16/60
	I0919 17:41:29.632853   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 17/60
	I0919 17:41:30.634554   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 18/60
	I0919 17:41:31.635837   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 19/60
	I0919 17:41:32.637327   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 20/60
	I0919 17:41:33.638850   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 21/60
	I0919 17:41:34.639920   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 22/60
	I0919 17:41:35.641016   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 23/60
	I0919 17:41:36.642685   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 24/60
	I0919 17:41:37.643980   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 25/60
	I0919 17:41:38.645253   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 26/60
	I0919 17:41:39.646832   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 27/60
	I0919 17:41:40.648301   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 28/60
	I0919 17:41:41.649656   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 29/60
	I0919 17:41:42.651511   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 30/60
	I0919 17:41:43.653566   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 31/60
	I0919 17:41:44.654954   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 32/60
	I0919 17:41:45.656237   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 33/60
	I0919 17:41:46.657558   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 34/60
	I0919 17:41:47.659241   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 35/60
	I0919 17:41:48.660535   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 36/60
	I0919 17:41:49.662948   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 37/60
	I0919 17:41:50.664496   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 38/60
	I0919 17:41:51.665753   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 39/60
	I0919 17:41:52.667403   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 40/60
	I0919 17:41:53.668710   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 41/60
	I0919 17:41:54.669839   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 42/60
	I0919 17:41:55.671152   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 43/60
	I0919 17:41:56.672318   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 44/60
	I0919 17:41:57.673946   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 45/60
	I0919 17:41:58.675162   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 46/60
	I0919 17:41:59.676314   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 47/60
	I0919 17:42:00.677642   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 48/60
	I0919 17:42:01.678998   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 49/60
	I0919 17:42:02.681082   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 50/60
	I0919 17:42:03.682883   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 51/60
	I0919 17:42:04.684302   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 52/60
	I0919 17:42:05.685788   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 53/60
	I0919 17:42:06.687100   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 54/60
	I0919 17:42:07.688674   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 55/60
	I0919 17:42:08.690036   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 56/60
	I0919 17:42:09.691408   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 57/60
	I0919 17:42:10.693011   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 58/60
	I0919 17:42:11.694376   45083 main.go:141] libmachine: (no-preload-215748) Waiting for machine to stop 59/60
	I0919 17:42:12.695315   45083 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0919 17:42:12.695361   45083 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:42:12.697355   45083 out.go:177] 
	W0919 17:42:12.698674   45083 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0919 17:42:12.698689   45083 out.go:239] * 
	* 
	W0919 17:42:12.700920   45083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 17:42:12.702265   45083 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-215748 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215748 -n no-preload-215748
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215748 -n no-preload-215748: exit status 3 (18.64418577s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:42:31.348800   45737 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host
	E0919 17:42:31.348818   45737 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-215748" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-415555 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-415555 --alsologtostderr -v=3: exit status 82 (2m1.158089223s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-415555"  ...
	* Stopping node "default-k8s-diff-port-415555"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:41:21.182848   45455 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:41:21.182957   45455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:41:21.182966   45455 out.go:309] Setting ErrFile to fd 2...
	I0919 17:41:21.182971   45455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:41:21.183136   45455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:41:21.183351   45455 out.go:303] Setting JSON to false
	I0919 17:41:21.183423   45455 mustload.go:65] Loading cluster: default-k8s-diff-port-415555
	I0919 17:41:21.183725   45455 config.go:182] Loaded profile config "default-k8s-diff-port-415555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:41:21.183785   45455 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/config.json ...
	I0919 17:41:21.183944   45455 mustload.go:65] Loading cluster: default-k8s-diff-port-415555
	I0919 17:41:21.184042   45455 config.go:182] Loaded profile config "default-k8s-diff-port-415555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:41:21.184063   45455 stop.go:39] StopHost: default-k8s-diff-port-415555
	I0919 17:41:21.184398   45455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:41:21.184483   45455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:41:21.198779   45455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I0919 17:41:21.199252   45455 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:41:21.199852   45455 main.go:141] libmachine: Using API Version  1
	I0919 17:41:21.199879   45455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:41:21.200182   45455 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:41:21.202717   45455 out.go:177] * Stopping node "default-k8s-diff-port-415555"  ...
	I0919 17:41:21.204184   45455 main.go:141] libmachine: Stopping "default-k8s-diff-port-415555"...
	I0919 17:41:21.204201   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Calling .GetState
	I0919 17:41:21.205901   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Calling .Stop
	I0919 17:41:21.209156   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 0/60
	I0919 17:41:22.211105   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 1/60
	I0919 17:41:23.212672   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 2/60
	I0919 17:41:24.214020   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 3/60
	I0919 17:41:25.215436   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 4/60
	I0919 17:41:26.217347   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 5/60
	I0919 17:41:27.218828   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 6/60
	I0919 17:41:28.220214   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 7/60
	I0919 17:41:29.221548   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 8/60
	I0919 17:41:30.222943   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 9/60
	I0919 17:41:31.224674   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 10/60
	I0919 17:41:32.226954   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 11/60
	I0919 17:41:33.228254   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 12/60
	I0919 17:41:34.229561   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 13/60
	I0919 17:41:35.231106   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 14/60
	I0919 17:41:36.233040   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 15/60
	I0919 17:41:37.234783   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 16/60
	I0919 17:41:38.236014   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 17/60
	I0919 17:41:39.237372   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 18/60
	I0919 17:41:40.238624   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 19/60
	I0919 17:41:41.240814   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 20/60
	I0919 17:41:42.242077   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 21/60
	I0919 17:41:43.243338   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 22/60
	I0919 17:41:44.245073   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 23/60
	I0919 17:41:45.246579   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 24/60
	I0919 17:41:46.248650   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 25/60
	I0919 17:41:47.250314   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 26/60
	I0919 17:41:48.251562   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 27/60
	I0919 17:41:49.253000   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 28/60
	I0919 17:41:50.254392   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 29/60
	I0919 17:41:51.256602   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 30/60
	I0919 17:41:52.257854   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 31/60
	I0919 17:41:53.259290   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 32/60
	I0919 17:41:54.260533   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 33/60
	I0919 17:41:55.262841   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 34/60
	I0919 17:41:56.264731   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 35/60
	I0919 17:41:57.266110   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 36/60
	I0919 17:41:58.267386   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 37/60
	I0919 17:41:59.268891   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 38/60
	I0919 17:42:00.270548   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 39/60
	I0919 17:42:01.272918   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 40/60
	I0919 17:42:02.274211   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 41/60
	I0919 17:42:03.275790   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 42/60
	I0919 17:42:04.277030   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 43/60
	I0919 17:42:05.278585   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 44/60
	I0919 17:42:06.280533   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 45/60
	I0919 17:42:07.281943   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 46/60
	I0919 17:42:08.283495   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 47/60
	I0919 17:42:09.284878   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 48/60
	I0919 17:42:10.286932   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 49/60
	I0919 17:42:11.289118   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 50/60
	I0919 17:42:12.290458   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 51/60
	I0919 17:42:13.292630   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 52/60
	I0919 17:42:14.294881   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 53/60
	I0919 17:42:15.296428   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 54/60
	I0919 17:42:16.298334   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 55/60
	I0919 17:42:17.299653   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 56/60
	I0919 17:42:18.301088   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 57/60
	I0919 17:42:19.302430   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 58/60
	I0919 17:42:20.303989   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 59/60
	I0919 17:42:21.305448   45455 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0919 17:42:21.305507   45455 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:42:21.305530   45455 retry.go:31] will retry after 875.130415ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:42:22.181528   45455 stop.go:39] StopHost: default-k8s-diff-port-415555
	I0919 17:42:22.182021   45455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:42:22.182076   45455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:42:22.196228   45455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0919 17:42:22.196737   45455 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:42:22.197270   45455 main.go:141] libmachine: Using API Version  1
	I0919 17:42:22.197312   45455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:42:22.197649   45455 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:42:22.199660   45455 out.go:177] * Stopping node "default-k8s-diff-port-415555"  ...
	I0919 17:42:22.201038   45455 main.go:141] libmachine: Stopping "default-k8s-diff-port-415555"...
	I0919 17:42:22.201055   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Calling .GetState
	I0919 17:42:22.202615   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Calling .Stop
	I0919 17:42:22.206137   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 0/60
	I0919 17:42:23.207600   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 1/60
	I0919 17:42:24.208956   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 2/60
	I0919 17:42:25.210806   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 3/60
	I0919 17:42:26.212169   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 4/60
	I0919 17:42:27.214218   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 5/60
	I0919 17:42:28.215511   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 6/60
	I0919 17:42:29.217225   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 7/60
	I0919 17:42:30.218805   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 8/60
	I0919 17:42:31.220211   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 9/60
	I0919 17:42:32.222163   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 10/60
	I0919 17:42:33.223401   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 11/60
	I0919 17:42:34.224807   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 12/60
	I0919 17:42:35.226098   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 13/60
	I0919 17:42:36.227240   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 14/60
	I0919 17:42:37.228894   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 15/60
	I0919 17:42:38.230192   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 16/60
	I0919 17:42:39.231686   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 17/60
	I0919 17:42:40.233171   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 18/60
	I0919 17:42:41.234591   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 19/60
	I0919 17:42:42.236211   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 20/60
	I0919 17:42:43.237719   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 21/60
	I0919 17:42:44.239572   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 22/60
	I0919 17:42:45.240983   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 23/60
	I0919 17:42:46.242263   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 24/60
	I0919 17:42:47.244319   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 25/60
	I0919 17:42:48.246683   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 26/60
	I0919 17:42:49.248549   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 27/60
	I0919 17:42:50.250019   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 28/60
	I0919 17:42:51.251512   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 29/60
	I0919 17:42:52.253298   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 30/60
	I0919 17:42:53.254639   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 31/60
	I0919 17:42:54.255991   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 32/60
	I0919 17:42:55.257314   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 33/60
	I0919 17:42:56.258558   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 34/60
	I0919 17:42:57.260227   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 35/60
	I0919 17:42:58.261668   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 36/60
	I0919 17:42:59.262987   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 37/60
	I0919 17:43:00.264363   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 38/60
	I0919 17:43:01.265702   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 39/60
	I0919 17:43:02.267288   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 40/60
	I0919 17:43:03.268767   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 41/60
	I0919 17:43:04.270830   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 42/60
	I0919 17:43:05.272109   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 43/60
	I0919 17:43:06.274030   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 44/60
	I0919 17:43:07.276111   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 45/60
	I0919 17:43:08.277453   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 46/60
	I0919 17:43:09.278964   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 47/60
	I0919 17:43:10.280266   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 48/60
	I0919 17:43:11.281900   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 49/60
	I0919 17:43:12.283645   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 50/60
	I0919 17:43:13.285460   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 51/60
	I0919 17:43:14.287010   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 52/60
	I0919 17:43:15.289037   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 53/60
	I0919 17:43:16.290412   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 54/60
	I0919 17:43:17.292320   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 55/60
	I0919 17:43:18.293708   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 56/60
	I0919 17:43:19.295048   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 57/60
	I0919 17:43:20.296336   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 58/60
	I0919 17:43:21.297724   45455 main.go:141] libmachine: (default-k8s-diff-port-415555) Waiting for machine to stop 59/60
	I0919 17:43:22.298601   45455 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0919 17:43:22.298642   45455 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:43:22.300612   45455 out.go:177] 
	W0919 17:43:22.302365   45455 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0919 17:43:22.302381   45455 out.go:239] * 
	* 
	W0919 17:43:22.304772   45455 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 17:43:22.306097   45455 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-415555 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555: exit status 3 (18.417284231s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:43:40.724689   46106 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host
	E0919 17:43:40.724709   46106 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415555" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.58s)
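
The post-mortem status checks in these sections all fail the same way: the SSH session into the guest cannot be opened ("dial tcp <ip>:22: connect: no route to host"), so the host state is reported as "Error" rather than "Stopped". A minimal sketch of that kind of reachability probe, using a hypothetical probeHostState helper rather than minikube's actual status code, is:

	// Sketch only: a hypothetical SSH-port reachability probe.
	package sshprobe

	import (
		"fmt"
		"net"
		"time"
	)

	// probeHostState reports "Running" if the guest answers on port 22 and
	// "Error" (with the underlying dial error) if it cannot be reached,
	// mirroring the "no route to host" -> state="Error" behaviour above.
	func probeHostState(ip string) (string, error) {
		addr := net.JoinHostPort(ip, "22")
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			return "Error", fmt.Errorf("status error: %w", err)
		}
		conn.Close()
		return "Running", nil
	}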

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-415155 -n embed-certs-415155
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-415155 -n embed-certs-415155: exit status 3 (3.167555524s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:42:01.236783   45585 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.6:22: connect: no route to host
	E0919 17:42:01.236809   45585 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.6:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-415155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-415155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152381322s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.6:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-415155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-415155 -n embed-certs-415155
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-415155 -n embed-certs-415155: exit status 3 (3.064528425s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:42:10.452782   45655 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.6:22: connect: no route to host
	E0919 17:42:10.452807   45655 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.6:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-415155" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215748 -n no-preload-215748
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215748 -n no-preload-215748: exit status 3 (3.167630461s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:42:34.516728   45827 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host
	E0919 17:42:34.516749   45827 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-215748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0919 17:42:39.334059   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-215748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154852009s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-215748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215748 -n no-preload-215748
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215748 -n no-preload-215748: exit status 3 (3.061331565s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:42:43.732786   45915 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host
	E0919 17:42:43.732808   45915 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.15:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-215748" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555: exit status 3 (3.171359615s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:43:43.896751   46182 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host
	E0919 17:43:43.896782   46182 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-415555 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-415555 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149007686s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-415555 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555: exit status 3 (3.062824581s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:43:53.108794   46252 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host
	E0919 17:43:53.108815   46252 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415555" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (139.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-100627 --alsologtostderr -v=3
E0919 17:47:56.282181   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:48:21.263753   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-100627 --alsologtostderr -v=3: exit status 82 (2m0.895148019s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-100627"  ...
	* Stopping node "old-k8s-version-100627"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:46:53.903513   46964 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:46:53.903816   46964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:46:53.903827   46964 out.go:309] Setting ErrFile to fd 2...
	I0919 17:46:53.903834   46964 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:46:53.904136   46964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:46:53.904435   46964 out.go:303] Setting JSON to false
	I0919 17:46:53.904551   46964 mustload.go:65] Loading cluster: old-k8s-version-100627
	I0919 17:46:53.905011   46964 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:46:53.905108   46964 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:46:53.905332   46964 mustload.go:65] Loading cluster: old-k8s-version-100627
	I0919 17:46:53.905488   46964 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:46:53.905526   46964 stop.go:39] StopHost: old-k8s-version-100627
	I0919 17:46:53.906068   46964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:46:53.906128   46964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:46:53.920436   46964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38005
	I0919 17:46:53.920904   46964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:46:53.921598   46964 main.go:141] libmachine: Using API Version  1
	I0919 17:46:53.921627   46964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:46:53.921950   46964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:46:53.924690   46964 out.go:177] * Stopping node "old-k8s-version-100627"  ...
	I0919 17:46:53.926150   46964 main.go:141] libmachine: Stopping "old-k8s-version-100627"...
	I0919 17:46:53.926171   46964 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:46:53.927912   46964 main.go:141] libmachine: (old-k8s-version-100627) Calling .Stop
	I0919 17:46:53.931515   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 0/60
	I0919 17:46:54.932865   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 1/60
	I0919 17:46:55.934372   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 2/60
	I0919 17:46:56.935973   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 3/60
	I0919 17:46:57.937515   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 4/60
	I0919 17:46:58.939439   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 5/60
	I0919 17:46:59.941306   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 6/60
	I0919 17:47:00.943117   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 7/60
	I0919 17:47:01.944350   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 8/60
	I0919 17:47:02.945515   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 9/60
	I0919 17:47:03.947579   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 10/60
	I0919 17:47:04.948811   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 11/60
	I0919 17:47:05.950891   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 12/60
	I0919 17:47:06.952147   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 13/60
	I0919 17:47:07.954069   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 14/60
	I0919 17:47:08.956360   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 15/60
	I0919 17:47:09.957777   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 16/60
	I0919 17:47:10.959561   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 17/60
	I0919 17:47:11.960773   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 18/60
	I0919 17:47:12.962180   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 19/60
	I0919 17:47:13.963699   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 20/60
	I0919 17:47:14.965032   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 21/60
	I0919 17:47:15.967213   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 22/60
	I0919 17:47:16.968708   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 23/60
	I0919 17:47:17.971078   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 24/60
	I0919 17:47:18.973466   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 25/60
	I0919 17:47:19.975061   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 26/60
	I0919 17:47:20.976558   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 27/60
	I0919 17:47:21.978087   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 28/60
	I0919 17:47:22.979577   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 29/60
	I0919 17:47:23.981954   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 30/60
	I0919 17:47:24.983125   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 31/60
	I0919 17:47:25.985244   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 32/60
	I0919 17:47:26.986933   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 33/60
	I0919 17:47:27.988534   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 34/60
	I0919 17:47:28.990384   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 35/60
	I0919 17:47:29.992941   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 36/60
	I0919 17:47:30.994554   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 37/60
	I0919 17:47:31.996142   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 38/60
	I0919 17:47:32.997538   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 39/60
	I0919 17:47:33.999844   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 40/60
	I0919 17:47:35.001052   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 41/60
	I0919 17:47:36.003029   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 42/60
	I0919 17:47:37.004382   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 43/60
	I0919 17:47:38.006077   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 44/60
	I0919 17:47:39.007823   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 45/60
	I0919 17:47:40.010135   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 46/60
	I0919 17:47:41.011421   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 47/60
	I0919 17:47:42.012835   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 48/60
	I0919 17:47:43.014208   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 49/60
	I0919 17:47:44.016814   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 50/60
	I0919 17:47:45.018423   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 51/60
	I0919 17:47:46.019773   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 52/60
	I0919 17:47:47.021836   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 53/60
	I0919 17:47:48.023973   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 54/60
	I0919 17:47:49.026252   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 55/60
	I0919 17:47:50.027928   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 56/60
	I0919 17:47:51.029691   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 57/60
	I0919 17:47:52.031514   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 58/60
	I0919 17:47:53.033236   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 59/60
	I0919 17:47:54.033649   46964 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0919 17:47:54.033708   46964 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:47:54.033728   46964 retry.go:31] will retry after 577.949837ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:47:54.612605   46964 stop.go:39] StopHost: old-k8s-version-100627
	I0919 17:47:54.613090   46964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:47:54.613169   46964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:47:54.632296   46964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0919 17:47:54.632929   46964 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:47:54.633443   46964 main.go:141] libmachine: Using API Version  1
	I0919 17:47:54.633474   46964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:47:54.633939   46964 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:47:54.636151   46964 out.go:177] * Stopping node "old-k8s-version-100627"  ...
	I0919 17:47:54.637872   46964 main.go:141] libmachine: Stopping "old-k8s-version-100627"...
	I0919 17:47:54.637891   46964 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:47:54.639845   46964 main.go:141] libmachine: (old-k8s-version-100627) Calling .Stop
	I0919 17:47:54.643815   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 0/60
	I0919 17:47:55.645295   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 1/60
	I0919 17:47:56.646892   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 2/60
	I0919 17:47:57.648202   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 3/60
	I0919 17:47:58.649446   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 4/60
	I0919 17:47:59.651424   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 5/60
	I0919 17:48:00.652994   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 6/60
	I0919 17:48:01.654971   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 7/60
	I0919 17:48:02.656105   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 8/60
	I0919 17:48:03.657969   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 9/60
	I0919 17:48:04.659570   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 10/60
	I0919 17:48:05.661266   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 11/60
	I0919 17:48:06.662923   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 12/60
	I0919 17:48:07.664619   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 13/60
	I0919 17:48:08.666963   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 14/60
	I0919 17:48:09.669174   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 15/60
	I0919 17:48:10.671040   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 16/60
	I0919 17:48:11.672553   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 17/60
	I0919 17:48:12.673993   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 18/60
	I0919 17:48:13.676057   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 19/60
	I0919 17:48:14.678389   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 20/60
	I0919 17:48:15.680046   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 21/60
	I0919 17:48:16.681248   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 22/60
	I0919 17:48:17.682855   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 23/60
	I0919 17:48:18.685087   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 24/60
	I0919 17:48:19.686466   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 25/60
	I0919 17:48:20.687839   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 26/60
	I0919 17:48:21.689597   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 27/60
	I0919 17:48:22.690917   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 28/60
	I0919 17:48:23.692489   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 29/60
	I0919 17:48:24.693949   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 30/60
	I0919 17:48:25.695347   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 31/60
	I0919 17:48:26.696850   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 32/60
	I0919 17:48:27.698114   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 33/60
	I0919 17:48:28.699364   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 34/60
	I0919 17:48:29.701042   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 35/60
	I0919 17:48:30.702994   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 36/60
	I0919 17:48:31.704471   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 37/60
	I0919 17:48:32.705789   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 38/60
	I0919 17:48:33.707329   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 39/60
	I0919 17:48:34.709144   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 40/60
	I0919 17:48:35.710495   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 41/60
	I0919 17:48:36.712380   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 42/60
	I0919 17:48:37.713799   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 43/60
	I0919 17:48:38.716133   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 44/60
	I0919 17:48:39.718452   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 45/60
	I0919 17:48:40.719716   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 46/60
	I0919 17:48:41.721367   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 47/60
	I0919 17:48:42.723597   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 48/60
	I0919 17:48:43.724930   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 49/60
	I0919 17:48:44.726572   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 50/60
	I0919 17:48:45.728137   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 51/60
	I0919 17:48:46.729368   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 52/60
	I0919 17:48:47.730699   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 53/60
	I0919 17:48:48.732498   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 54/60
	I0919 17:48:49.734165   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 55/60
	I0919 17:48:50.736216   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 56/60
	I0919 17:48:51.737494   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 57/60
	I0919 17:48:52.739080   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 58/60
	I0919 17:48:53.740295   46964 main.go:141] libmachine: (old-k8s-version-100627) Waiting for machine to stop 59/60
	I0919 17:48:54.741377   46964 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0919 17:48:54.741420   46964 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0919 17:48:54.743373   46964 out.go:177] 
	W0919 17:48:54.744805   46964 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0919 17:48:54.744832   46964 out.go:239] * 
	* 
	W0919 17:48:54.747139   46964 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 17:48:54.748658   46964 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-100627 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100627 -n old-k8s-version-100627
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100627 -n old-k8s-version-100627: exit status 3 (18.517944979s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:49:13.268769   47618 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	E0919 17:49:13.268791   47618 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-100627" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.41s)
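The Stop failure above is a timeout in a plain poll loop: the driver is asked to stop the VM, the machine state is polled once per second for 60 attempts, the whole stop is retried once after a short backoff, and the command finally exits with GUEST_STOP_TIMEOUT (exit status 82) because the guest never leaves the "Running" state. A minimal sketch of that poll-and-retry pattern, using hypothetical requestStop/machineState helpers rather than the real libmachine/kvm2 driver API:

package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins for the kvm2 driver calls seen in the log
// ("Calling .Stop" / "Calling .GetState"); here the guest simply never stops.
func requestStop() error   { return nil }
func machineState() string { return "Running" }

// stopHost mirrors the observed behaviour: request a stop, then poll the state
// once per second for 60 attempts before reporting a temporary error.
func stopHost() error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < 60; i++ {
		fmt.Printf("Waiting for machine to stop %d/60\n", i)
		if machineState() == "Stopped" {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", machineState())
}

func main() {
	// One retry after a sub-second backoff, as in the log above, then give up;
	// the real binary exits with status 82 (GUEST_STOP_TIMEOUT) at this point.
	if err := stopHost(); err != nil {
		time.Sleep(578 * time.Millisecond)
		if err = stopHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM:", err)
		}
	}
}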

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100627 -n old-k8s-version-100627
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100627 -n old-k8s-version-100627: exit status 3 (3.171690551s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:49:16.440719   47689 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	E0919 17:49:16.440746   47689 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-100627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-100627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.150071776s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-100627 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100627 -n old-k8s-version-100627
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100627 -n old-k8s-version-100627: exit status 3 (3.061575671s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 17:49:25.652810   47758 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	E0919 17:49:25.652834   47758 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-100627" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)
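The four EnableAddonAfterStop failures above all break at the same point: before enabling an addon, the CLI checks whether the cluster's container runtime is paused, that check needs an SSH session into the node, and after the failed stop the node's address is unreachable ("no route to host"), so the command exits with MK_ADDON_ENABLE_PAUSED (exit status 11). A rough sketch of that failure chain, assuming hypothetical Runner/checkPaused/enableAddon helpers rather than minikube's actual internals:

package main

import "fmt"

// Runner abstracts "run a command on the guest node". In these tests that is an
// SSH session to the VM, which is exactly the step that fails with
// "dial tcp <ip>:22: connect: no route to host".
type Runner interface {
	Run(cmd string) (string, error)
}

// unreachableVM reproduces the post-stop state seen above: no SSH session can be
// opened, so no command ever runs.
type unreachableVM struct{ addr string }

func (u unreachableVM) Run(cmd string) (string, error) {
	return "", fmt.Errorf("NewSession: new client: new client: dial tcp %s:22: connect: no route to host", u.addr)
}

// checkPaused mirrors the shape of the failing pre-flight check: ask the
// container runtime for its containers and (in the real tool) decide whether the
// cluster is paused. Output parsing is elided in this sketch.
func checkPaused(r Runner) (bool, error) {
	if _, err := r.Run("sudo crictl ps -a"); err != nil {
		return false, fmt.Errorf("check paused: list paused: crictl list: %w", err)
	}
	return false, nil
}

// enableAddon gives up when the paused check cannot run at all, which is the
// MK_ADDON_ENABLE_PAUSED / exit status 11 path in the logs.
func enableAddon(r Runner, name string) error {
	if _, err := checkPaused(r); err != nil {
		return fmt.Errorf("enable failed: %w", err)
	}
	// ...apply the addon's manifests here once the runtime state is known...
	return nil
}

func main() {
	if err := enableAddon(unreachableVM{addr: "192.168.50.6"}, "dashboard"); err != nil {
		fmt.Println("X Exiting due to MK_ADDON_ENABLE_PAUSED:", err)
	}
}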

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-19 18:01:20.837181959 +0000 UTC m=+5213.769165997
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-415555 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-415555 logs -n 25: (1.375363216s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-512928 -- sudo                         | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-512928                                 | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-367630                            | force-systemd-env-367630     | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-415155            | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-140688 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | disable-driver-mounts-140688                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:41 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-215748             | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-415555  | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC | 19 Sep 23 17:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC |                     |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-415155                 | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-215748                  | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-415555       | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC | 19 Sep 23 17:52 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-100627        | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC | 19 Sep 23 17:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-100627             | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:49:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:49:25.690379   47798 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:49:25.690666   47798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:49:25.690680   47798 out.go:309] Setting ErrFile to fd 2...
	I0919 17:49:25.690688   47798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:49:25.690866   47798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:49:25.691435   47798 out.go:303] Setting JSON to false
	I0919 17:49:25.692368   47798 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5516,"bootTime":1695140250,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:49:25.692468   47798 start.go:138] virtualization: kvm guest
	I0919 17:49:25.694628   47798 out.go:177] * [old-k8s-version-100627] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:49:25.696349   47798 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:49:25.696345   47798 notify.go:220] Checking for updates...
	I0919 17:49:25.697700   47798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:49:25.699081   47798 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:49:25.700392   47798 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:49:25.701684   47798 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:49:25.704016   47798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:49:25.705911   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:49:25.706464   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.706525   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.722505   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40829
	I0919 17:49:25.722936   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.723454   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.723479   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.723851   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.724042   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.726028   47798 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0919 17:49:25.727479   47798 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:49:25.727787   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.727829   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.743272   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I0919 17:49:25.743700   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.744180   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.744206   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.744589   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.744775   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.781696   47798 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:49:25.783056   47798 start.go:298] selected driver: kvm2
	I0919 17:49:25.783069   47798 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:49:25.783172   47798 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:49:25.783797   47798 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:49:25.783868   47798 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:49:25.797796   47798 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:49:25.798190   47798 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 17:49:25.798229   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:49:25.798239   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:49:25.798254   47798 start_flags.go:321] config:
	{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:49:25.798391   47798 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:49:25.800110   47798 out.go:177] * Starting control plane node old-k8s-version-100627 in cluster old-k8s-version-100627
	I0919 17:49:25.801393   47798 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:49:25.801433   47798 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0919 17:49:25.801447   47798 cache.go:57] Caching tarball of preloaded images
	I0919 17:49:25.801545   47798 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:49:25.801559   47798 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0919 17:49:25.801689   47798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:49:25.801924   47798 start.go:365] acquiring machines lock for old-k8s-version-100627: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:49:25.801971   47798 start.go:369] acquired machines lock for "old-k8s-version-100627" in 26.483µs
	I0919 17:49:25.801985   47798 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:49:25.801989   47798 fix.go:54] fixHost starting: 
	I0919 17:49:25.802270   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.802300   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.816968   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
	I0919 17:49:25.817484   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.818034   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.818069   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.818376   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.818564   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.818799   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:49:25.820610   47798 fix.go:102] recreateIfNeeded on old-k8s-version-100627: state=Running err=<nil>
	W0919 17:49:25.820646   47798 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:49:25.822656   47798 out.go:177] * Updating the running kvm2 "old-k8s-version-100627" VM ...
	I0919 17:49:25.475965   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:27.476794   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:24.179260   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:26.686283   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:26.993419   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:28.995394   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:25.824024   47798 machine.go:88] provisioning docker machine ...
	I0919 17:49:25.824053   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.824279   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:49:25.824480   47798 buildroot.go:166] provisioning hostname "old-k8s-version-100627"
	I0919 17:49:25.824508   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:49:25.824671   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:49:25.827416   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:49:25.827890   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:49:25.827920   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:49:25.828092   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:49:25.828287   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:49:25.828490   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:49:25.828642   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:49:25.828819   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:49:25.829172   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:49:25.829188   47798 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100627 && echo "old-k8s-version-100627" | sudo tee /etc/hostname
	I0919 17:49:28.724736   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:29.976563   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.976829   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:29.180775   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.677584   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:33.678666   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.493348   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:33.495016   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.796651   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:33.977341   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:36.477521   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:36.178183   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:38.679802   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:35.495920   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:37.993770   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:39.994165   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:37.876662   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:38.477642   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:40.977376   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:41.177699   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:43.178895   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:42.494311   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:44.494974   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:40.948690   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:43.476725   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:45.477936   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:47.977074   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:45.678443   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:48.178687   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:46.994529   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:49.494895   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:47.028682   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:50.100607   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:50.476569   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:52.478246   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:50.179250   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:52.180827   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:51.994091   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.494911   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.480792   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.978326   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.678236   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.678493   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:58.678539   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.496729   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:58.993989   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:59.224657   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:59.476603   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:01.477023   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:00.678913   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:03.178281   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:01.494409   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:03.993808   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:02.292662   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:03.477796   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.976205   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.180836   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:07.678312   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.994188   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:07.999270   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:08.372675   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:08.476522   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:10.976260   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:09.679568   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:12.179377   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:10.494291   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:12.995682   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:11.444679   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:13.476906   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:15.478193   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.976583   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:14.679325   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:16.690040   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:15.496998   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.993599   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:19.993922   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.524614   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:20.596688   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:20.476110   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:22.477330   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:19.184902   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:21.678830   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:23.679261   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:22.494626   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:24.993912   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:24.976379   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.976627   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.177309   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:28.179300   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:27.494133   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:29.494473   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.676677   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:29.748706   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:28.976722   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:30.980716   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:30.678715   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.177789   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:31.993563   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.995728   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.476205   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.975739   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:37.978115   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.178188   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:37.178328   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:36.493541   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:38.494380   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.832612   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:38.900652   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:40.476580   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:42.476989   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:39.180279   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:41.678338   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:43.678611   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:40.993785   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:42.994446   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:44.980626   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:44.976641   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:46.977032   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:46.178379   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:48.179405   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:45.494929   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:47.993704   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:49.995192   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:48.052702   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:48.977244   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:51.477325   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:50.678663   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:53.178707   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:52.493646   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:54.494478   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:54.132706   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:53.477737   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:55.977429   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:57.978145   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:55.678855   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:58.177724   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:56.993145   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:58.994370   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:57.208643   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:00.476193   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:02.476286   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:00.178398   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:02.677951   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:01.501993   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:03.993491   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:03.288721   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:04.476795   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:06.976387   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:05.177376   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:07.178224   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:05.995006   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:08.494405   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:06.360657   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:08.977404   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:11.475407   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:09.178322   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:11.179143   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:13.180235   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:10.494521   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:12.993993   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:12.436681   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:15.508678   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:13.975736   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.977800   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.679181   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:18.177065   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.494642   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:17.494846   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:19.993481   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:18.475821   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:20.476773   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:22.976145   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:20.178065   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:22.178249   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:21.993613   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:23.994655   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:21.588622   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:24.660703   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:24.976569   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:27.476021   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:24.678762   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:26.682314   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:26.493981   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:28.494262   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:29.477183   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:31.976125   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:29.178390   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:31.178551   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:33.678277   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:30.495041   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:32.993120   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:30.740717   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:33.816640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:33.977079   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:36.475678   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:36.179024   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:38.678508   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:35.495368   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:37.994521   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:39.892631   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:38.476601   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:40.978279   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:41.178365   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:43.678896   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:40.493826   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:42.992893   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:44.993574   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:42.968646   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:43.478156   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:45.976257   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:47.977272   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:46.178127   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:48.178192   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:47.494860   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:49.993714   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:49.044674   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:50.476391   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:52.976686   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:50.678434   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:53.177908   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:51.995140   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:54.494996   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:52.116699   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:54.977835   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:57.475875   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:55.178219   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:57.179598   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:56.992881   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:58.994100   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:58.200619   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:59.476340   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:01.975559   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:59.678336   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:00.158668   45961 pod_ready.go:81] duration metric: took 4m0.000408372s waiting for pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:00.158710   45961 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:00.158733   45961 pod_ready.go:38] duration metric: took 4m12.69690087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:00.158768   45961 kubeadm.go:640] restartCluster took 4m32.67884897s
	W0919 17:52:00.158862   45961 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0919 17:52:00.158899   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:52:00.995208   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:03.493604   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:01.272609   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:03.976776   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:06.478653   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:05.495181   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:07.995025   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:07.348614   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:10.424641   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:08.170853   46282 pod_ready.go:81] duration metric: took 4m0.00010513s waiting for pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:08.170890   46282 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:08.170903   46282 pod_ready.go:38] duration metric: took 4m5.202195097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:08.170929   46282 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:52:08.170960   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:08.171010   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:08.229465   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:08.229484   46282 cri.go:89] found id: ""
	I0919 17:52:08.229491   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:08.229537   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.234379   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:08.234434   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:08.280999   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:08.281033   46282 cri.go:89] found id: ""
	I0919 17:52:08.281044   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:08.281097   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.285499   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:08.285561   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:08.327387   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:08.327413   46282 cri.go:89] found id: ""
	I0919 17:52:08.327423   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:08.327481   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.333158   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:08.333235   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:08.375921   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:08.375946   46282 cri.go:89] found id: ""
	I0919 17:52:08.375955   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:08.376008   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.380156   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:08.380220   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:08.425586   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:08.425613   46282 cri.go:89] found id: ""
	I0919 17:52:08.425620   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:08.425676   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.430229   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:08.430302   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:08.482920   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:08.482946   46282 cri.go:89] found id: ""
	I0919 17:52:08.482956   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:08.483017   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.488497   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:08.488559   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:08.543405   46282 cri.go:89] found id: ""
	I0919 17:52:08.543432   46282 logs.go:284] 0 containers: []
	W0919 17:52:08.543441   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:08.543449   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:08.543510   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:08.588287   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:08.588309   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:08.588314   46282 cri.go:89] found id: ""
	I0919 17:52:08.588326   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:08.588390   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.592986   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.597223   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:08.597245   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:08.648372   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:08.648400   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:08.705158   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:08.705203   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:08.754475   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:08.754511   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:08.797571   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:08.797603   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:08.950578   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:08.950617   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:08.998529   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:08.998555   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:09.039415   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:09.039445   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:09.081622   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:09.081657   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:09.095239   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:09.095269   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:09.141402   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:09.141429   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:09.186918   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:09.186953   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:09.244473   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:09.244508   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
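(Editorial note, not part of the captured log.) The cycle above locates each control-plane container with `crictl ps -a --quiet --name=<component>` and then tails its output with `crictl logs --tail 400 <id>`, plus journalctl for kubelet/CRI-O. As a rough illustration only, and not minikube's own implementation, the same gathering step can be reproduced directly on the node with a small Go helper; the component list and the 400-line tail mirror the log, everything else (running locally instead of over SSH) is an assumption.

// Illustrative sketch only: reproduce the "gathering logs" step seen above by
// shelling out to crictl, the way the commands appear in the log.
// Assumes crictl is on PATH and the caller can sudo, as the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, name := range components {
		// crictl ps -a --quiet --name=<component> prints one container ID per line.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each container, as in the log above.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("logs for %s [%s] failed: %v\n", name, id, err)
				continue
			}
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}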
	I0919 17:52:12.216337   46282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:52:12.232741   46282 api_server.go:72] duration metric: took 4m15.890515742s to wait for apiserver process to appear ...
	I0919 17:52:12.232764   46282 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:52:12.232793   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:12.232844   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:12.279741   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:12.279769   46282 cri.go:89] found id: ""
	I0919 17:52:12.279780   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:12.279836   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.284490   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:12.284560   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:12.322547   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:12.322575   46282 cri.go:89] found id: ""
	I0919 17:52:12.322585   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:12.322648   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.326924   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:12.326981   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:12.376181   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:12.376201   46282 cri.go:89] found id: ""
	I0919 17:52:12.376208   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:12.376259   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.380831   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:12.380892   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:12.422001   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:12.422035   46282 cri.go:89] found id: ""
	I0919 17:52:12.422045   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:12.422112   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.426372   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:12.426456   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:12.474718   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:12.474739   46282 cri.go:89] found id: ""
	I0919 17:52:12.474749   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:12.474804   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.479781   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:12.479837   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:12.525008   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:12.525038   46282 cri.go:89] found id: ""
	I0919 17:52:12.525047   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:12.525106   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.529414   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:12.529480   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:12.573369   46282 cri.go:89] found id: ""
	I0919 17:52:12.573395   46282 logs.go:284] 0 containers: []
	W0919 17:52:12.573403   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:12.573410   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:12.573461   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:12.618041   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:12.618063   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:12.618067   46282 cri.go:89] found id: ""
	I0919 17:52:12.618074   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:12.618118   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.622248   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.626519   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:12.626537   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:12.667023   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:12.667052   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:13.123963   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:13.123996   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:10.495145   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:12.994448   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:13.243498   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:13.243533   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:13.289172   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:13.289208   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:13.325853   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:13.325883   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:13.363915   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:13.363943   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:13.412359   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:13.412394   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:13.458675   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:13.458706   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:13.473516   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:13.473549   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:13.538694   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:13.538723   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:13.606826   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:13.606871   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:13.652363   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:13.652394   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.204482   46282 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8444/healthz ...
	I0919 17:52:16.210733   46282 api_server.go:279] https://192.168.61.228:8444/healthz returned 200:
	ok
	I0919 17:52:16.212054   46282 api_server.go:141] control plane version: v1.28.2
	I0919 17:52:16.212076   46282 api_server.go:131] duration metric: took 3.979306376s to wait for apiserver health ...
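(Editorial note, not part of the captured log.) The wait above succeeds once https://192.168.61.228:8444/healthz returns 200 "ok" and the control-plane version can be read back. Below is a minimal, self-contained sketch of such a probe; it is not minikube's implementation, which authenticates with the cluster's client certificates, and the InsecureSkipVerify shortcut is for illustration only.

// Minimal sketch of an apiserver healthz probe like the one logged above.
// The endpoint URL is taken from the log; real code would present client
// certificates instead of skipping TLS verification.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get("https://192.168.61.228:8444/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}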
	I0919 17:52:16.212085   46282 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:52:16.212106   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:16.212148   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:16.263882   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:16.263908   46282 cri.go:89] found id: ""
	I0919 17:52:16.263918   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:16.263978   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.268238   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:16.268291   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:16.309480   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:16.309504   46282 cri.go:89] found id: ""
	I0919 17:52:16.309511   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:16.309560   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.313860   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:16.313910   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:16.353715   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:16.353741   46282 cri.go:89] found id: ""
	I0919 17:52:16.353751   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:16.353812   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.358128   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:16.358194   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:16.398792   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:16.398811   46282 cri.go:89] found id: ""
	I0919 17:52:16.398818   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:16.398865   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.403410   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:16.403463   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:16.449884   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:16.449910   46282 cri.go:89] found id: ""
	I0919 17:52:16.449924   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:16.449966   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.454404   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:16.454462   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:16.500246   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:16.500265   46282 cri.go:89] found id: ""
	I0919 17:52:16.500274   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:16.500328   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.504468   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:16.504531   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:16.545865   46282 cri.go:89] found id: ""
	I0919 17:52:16.545888   46282 logs.go:284] 0 containers: []
	W0919 17:52:16.545895   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:16.545900   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:16.545953   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:16.584533   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:16.584560   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.584565   46282 cri.go:89] found id: ""
	I0919 17:52:16.584571   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:16.584619   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.588723   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.592429   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:16.592459   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:16.643853   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:16.643884   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:16.693660   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:16.693697   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:16.710833   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:16.710860   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:16.769518   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:16.769548   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:16.819614   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:16.819645   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.860112   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:16.860154   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:16.918657   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:16.918687   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:16.962381   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:16.962412   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:17.304580   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:17.304618   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:17.449337   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:17.449368   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:17.522234   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:17.522268   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:17.581061   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:17.581093   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:13.986517   45961 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.82758933s)
	I0919 17:52:13.986593   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:14.002396   45961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:52:14.012005   45961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:52:14.020952   45961 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:52:14.021075   45961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 17:52:14.249350   45961 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
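(Editorial note, not part of the captured log.) The config check a few lines up exits with status 2 simply because none of the kubeconfig files under /etc/kubernetes exist after `kubeadm reset`, so stale-config cleanup is skipped and a fresh `kubeadm init` is started. A trivial sketch of that existence check is below; it assumes it runs directly on the node rather than over SSH, and the paths are copied from the log.

// Sketch of the stale-config existence check logged above: if any of the
// kubeconfig files under /etc/kubernetes are missing, skip cleanup and
// fall through to a fresh `kubeadm init`.
package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	allPresent := true
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("%s: %v\n", f, err) // e.g. "no such file or directory"
			allPresent = false
		}
	}
	if allPresent {
		fmt.Println("existing config found: stale config cleanup would run here")
	} else {
		fmt.Println("config check failed, skipping stale config cleanup (fresh init)")
	}
}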
	I0919 17:52:20.161795   46282 system_pods.go:59] 8 kube-system pods found
	I0919 17:52:20.161825   46282 system_pods.go:61] "coredns-5dd5756b68-6fxz5" [75ff79be-8293-4d55-b285-4c6d1d64adf0] Running
	I0919 17:52:20.161833   46282 system_pods.go:61] "etcd-default-k8s-diff-port-415555" [673a7ad9-f811-426c-aef6-7a36f2ddcacf] Running
	I0919 17:52:20.161840   46282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-415555" [a28bd2e3-0c0b-415e-986f-e92052c9eb0d] Running
	I0919 17:52:20.161845   46282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-415555" [b3b0d3ac-4f9f-4127-902e-4ea4c49100ba] Running
	I0919 17:52:20.161850   46282 system_pods.go:61] "kube-proxy-5cghw" [2f9f0da5-e5e6-40e4-bb49-16749597ac07] Running
	I0919 17:52:20.161856   46282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-415555" [60410add-cbf3-403a-9a84-f8566f804757] Running
	I0919 17:52:20.161866   46282 system_pods.go:61] "metrics-server-57f55c9bc5-vq4p7" [be4949af-dd94-45ea-bb7d-2fe124ecd2a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:20.161876   46282 system_pods.go:61] "storage-provisioner" [d11f41ac-f846-46de-a517-dab454f05033] Running
	I0919 17:52:20.161885   46282 system_pods.go:74] duration metric: took 3.949793054s to wait for pod list to return data ...
	I0919 17:52:20.161895   46282 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:52:20.165017   46282 default_sa.go:45] found service account: "default"
	I0919 17:52:20.165041   46282 default_sa.go:55] duration metric: took 3.138746ms for default service account to be created ...
	I0919 17:52:20.165051   46282 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:52:20.171771   46282 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:20.171798   46282 system_pods.go:89] "coredns-5dd5756b68-6fxz5" [75ff79be-8293-4d55-b285-4c6d1d64adf0] Running
	I0919 17:52:20.171807   46282 system_pods.go:89] "etcd-default-k8s-diff-port-415555" [673a7ad9-f811-426c-aef6-7a36f2ddcacf] Running
	I0919 17:52:20.171815   46282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-415555" [a28bd2e3-0c0b-415e-986f-e92052c9eb0d] Running
	I0919 17:52:20.171823   46282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-415555" [b3b0d3ac-4f9f-4127-902e-4ea4c49100ba] Running
	I0919 17:52:20.171841   46282 system_pods.go:89] "kube-proxy-5cghw" [2f9f0da5-e5e6-40e4-bb49-16749597ac07] Running
	I0919 17:52:20.171847   46282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-415555" [60410add-cbf3-403a-9a84-f8566f804757] Running
	I0919 17:52:20.171858   46282 system_pods.go:89] "metrics-server-57f55c9bc5-vq4p7" [be4949af-dd94-45ea-bb7d-2fe124ecd2a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:20.171867   46282 system_pods.go:89] "storage-provisioner" [d11f41ac-f846-46de-a517-dab454f05033] Running
	I0919 17:52:20.171879   46282 system_pods.go:126] duration metric: took 6.820805ms to wait for k8s-apps to be running ...
	I0919 17:52:20.171891   46282 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:52:20.171944   46282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:20.191948   46282 system_svc.go:56] duration metric: took 20.046863ms WaitForService to wait for kubelet.
	I0919 17:52:20.191977   46282 kubeadm.go:581] duration metric: took 4m23.849755591s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:52:20.192003   46282 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:52:20.198066   46282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:52:20.198090   46282 node_conditions.go:123] node cpu capacity is 2
	I0919 17:52:20.198101   46282 node_conditions.go:105] duration metric: took 6.093464ms to run NodePressure ...
	I0919 17:52:20.198113   46282 start.go:228] waiting for startup goroutines ...
	I0919 17:52:20.198122   46282 start.go:233] waiting for cluster config update ...
	I0919 17:52:20.198131   46282 start.go:242] writing updated cluster config ...
	I0919 17:52:20.198390   46282 ssh_runner.go:195] Run: rm -f paused
	I0919 17:52:20.260334   46282 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:52:20.262660   46282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-415555" cluster and "default" namespace by default
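(Editorial note, not part of the captured log.) The sequence ending above — kube-system pod list, default service account, kubelet service check, NodePressure — is what the start-up wait verifies before this profile is declared ready. As a hedged illustration of the first of those checks only, the sketch below uses client-go to list kube-system pods and count how many report phase Running. This is not minikube's wait code; the ~/.kube/config path and the Running-phase comparison are assumptions made for the sketch.

// Illustrative sketch (not minikube's wait code): list kube-system pods with
// client-go and count how many report phase Running, similar in spirit to the
// system_pods wait logged above. Assumes a kubeconfig at ~/.kube/config.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
		fmt.Printf("%-55s %s\n", p.Name, p.Status.Phase)
	}
	fmt.Printf("%d/%d kube-system pods Running\n", running, len(pods.Items))
}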
	I0919 17:52:15.493238   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:17.495147   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:19.497990   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:16.500634   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:19.572697   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
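(Editorial note, not part of the captured log.) The repeated libmachine lines above are SSH reachability probes against 192.168.72.182:22 failing with "no route to host" while that VM is not yet (or no longer) reachable. A minimal stand-alone check of the same kind, purely for illustration: the address is copied from the log and the 3-second timeout is an arbitrary choice.

// Minimal sketch of a TCP reachability probe like the dial attempts above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.72.182:22"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("not reachable:", err) // e.g. "connect: no route to host"
		return
	}
	defer conn.Close()
	fmt.Println("reachable:", addr)
}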
	I0919 17:52:25.436229   45961 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 17:52:25.436332   45961 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:52:25.436448   45961 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:52:25.436580   45961 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:52:25.436693   45961 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:52:25.436784   45961 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:52:25.438740   45961 out.go:204]   - Generating certificates and keys ...
	I0919 17:52:25.438831   45961 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:52:25.438907   45961 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:52:25.439035   45961 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:52:25.439117   45961 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:52:25.439225   45961 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:52:25.439306   45961 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:52:25.439378   45961 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:52:25.439455   45961 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:52:25.439554   45961 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:52:25.439646   45961 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:52:25.439692   45961 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:52:25.439759   45961 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:52:25.439825   45961 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:52:25.439892   45961 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:52:25.439982   45961 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:52:25.440068   45961 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:52:25.440183   45961 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:52:25.440276   45961 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:52:25.441897   45961 out.go:204]   - Booting up control plane ...
	I0919 17:52:25.442005   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:52:25.442103   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:52:25.442163   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:52:25.442248   45961 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:52:25.442343   45961 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:52:25.442428   45961 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 17:52:25.442641   45961 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:52:25.442703   45961 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003935 seconds
	I0919 17:52:25.442819   45961 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:52:25.442911   45961 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:52:25.442959   45961 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:52:25.443101   45961 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-215748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 17:52:25.443144   45961 kubeadm.go:322] [bootstrap-token] Using token: xzx8bb.31rxl0d2e5l1asvj
	I0919 17:52:25.444479   45961 out.go:204]   - Configuring RBAC rules ...
	I0919 17:52:25.444574   45961 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:52:25.444640   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 17:52:25.444747   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:52:25.444886   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:52:25.445049   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:52:25.445178   45961 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:52:25.445344   45961 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 17:52:25.445403   45961 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:52:25.445462   45961 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:52:25.445475   45961 kubeadm.go:322] 
	I0919 17:52:25.445558   45961 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:52:25.445569   45961 kubeadm.go:322] 
	I0919 17:52:25.445659   45961 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:52:25.445672   45961 kubeadm.go:322] 
	I0919 17:52:25.445691   45961 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:52:25.445740   45961 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:52:25.445779   45961 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:52:25.445785   45961 kubeadm.go:322] 
	I0919 17:52:25.445824   45961 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 17:52:25.445830   45961 kubeadm.go:322] 
	I0919 17:52:25.445873   45961 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 17:52:25.445879   45961 kubeadm.go:322] 
	I0919 17:52:25.445939   45961 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:52:25.446038   45961 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:52:25.446154   45961 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:52:25.446172   45961 kubeadm.go:322] 
	I0919 17:52:25.446275   45961 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 17:52:25.446361   45961 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:52:25.446371   45961 kubeadm.go:322] 
	I0919 17:52:25.446473   45961 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xzx8bb.31rxl0d2e5l1asvj \
	I0919 17:52:25.446594   45961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 17:52:25.446623   45961 kubeadm.go:322] 	--control-plane 
	I0919 17:52:25.446641   45961 kubeadm.go:322] 
	I0919 17:52:25.446774   45961 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:52:25.446782   45961 kubeadm.go:322] 
	I0919 17:52:25.446874   45961 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xzx8bb.31rxl0d2e5l1asvj \
	I0919 17:52:25.447044   45961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:52:25.447066   45961 cni.go:84] Creating CNI manager for ""
	I0919 17:52:25.447079   45961 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:52:25.448742   45961 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:52:21.994034   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:24.494339   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:25.656705   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:25.450147   45961 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:52:25.473476   45961 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
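(Editorial note, not part of the captured log.) The line above copies a 457-byte bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist; the file's exact contents are not shown in the log. As a hedged illustration only, the sketch below writes a generic CNI bridge conflist of the same general shape — the plugin list and the 10.244.0.0/16 subnet are assumptions, not the file minikube actually ships.

// Hedged illustration: write a generic CNI bridge conflist like the one the
// log shows being copied to /etc/cni/net.d/1-k8s.conflist.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// 0644 so the container runtime (CRI-O here) can read the config.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed (needs root):", err)
		return
	}
	fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
}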
	I0919 17:52:25.529295   45961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:52:25.529383   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:25.529387   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=no-preload-215748 minikube.k8s.io/updated_at=2023_09_19T17_52_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:25.625308   45961 ops.go:34] apiserver oom_adj: -16
	I0919 17:52:25.905954   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.037543   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.638479   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:27.138484   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:27.637901   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:28.138033   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:28.638787   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.494798   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:28.213192   45696 pod_ready.go:81] duration metric: took 4m0.001033854s waiting for pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:28.213226   45696 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:28.213243   45696 pod_ready.go:38] duration metric: took 4m12.067034727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:28.213266   45696 kubeadm.go:640] restartCluster took 4m32.254857032s
	W0919 17:52:28.213338   45696 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0919 17:52:28.213378   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
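(Editorial note, not part of the captured log.) Process 45696 above has just hit its 4m0s pod_ready deadline for metrics-server and falls back to `kubeadm reset`. The pod_ready:102 lines it was emitting correspond to the pod's Ready condition, which can be read directly; the sketch below shows one way to do that. It is illustration only: the pod name is copied from the log, and kubectl on PATH with a working kubeconfig is assumed.

// Sketch of the "Ready" condition check behind the pod_ready lines above:
// read the pod's Ready condition with kubectl's jsonpath output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
		"metrics-server-57f55c9bc5-5jqm8",
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("Ready:", strings.TrimSpace(string(out))) // "False" in the log above
}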
	I0919 17:52:28.728646   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:29.138616   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:29.638381   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:30.138155   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:30.637984   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:31.137977   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:31.638547   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:32.138617   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:32.638253   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:33.138335   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:33.638302   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:34.804640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:34.138702   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:34.638549   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:35.138431   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:35.638642   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:36.138000   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:36.638726   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:37.138394   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:37.315805   45961 kubeadm.go:1081] duration metric: took 11.786488266s to wait for elevateKubeSystemPrivileges.
	I0919 17:52:37.315840   45961 kubeadm.go:406] StartCluster complete in 5m9.899215362s
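(Editorial note, not part of the captured log.) The half-second cadence of the repeated `kubectl get sa default` calls above is a poll for the default ServiceAccount to be created by the controller manager; once it appears, the privilege-elevation step (elevateKubeSystemPrivileges) is counted as done, here after about 11.8s. A stand-alone sketch of that polling pattern, for illustration only — kubectl on PATH and the 2-minute cap are assumptions:

// Sketch of the polling pattern seen above: run `kubectl get sa default`
// every 500ms until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed cap for the sketch
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}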
	I0919 17:52:37.315856   45961 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:52:37.315945   45961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:52:37.317563   45961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:52:37.317815   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:52:37.317844   45961 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:52:37.317936   45961 addons.go:69] Setting storage-provisioner=true in profile "no-preload-215748"
	I0919 17:52:37.317943   45961 addons.go:69] Setting default-storageclass=true in profile "no-preload-215748"
	I0919 17:52:37.317959   45961 addons.go:231] Setting addon storage-provisioner=true in "no-preload-215748"
	I0919 17:52:37.317963   45961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-215748"
	W0919 17:52:37.317967   45961 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:52:37.317964   45961 addons.go:69] Setting metrics-server=true in profile "no-preload-215748"
	I0919 17:52:37.317988   45961 addons.go:231] Setting addon metrics-server=true in "no-preload-215748"
	W0919 17:52:37.318000   45961 addons.go:240] addon metrics-server should already be in state true
	I0919 17:52:37.318016   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.318041   45961 config.go:182] Loaded profile config "no-preload-215748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:52:37.318051   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.318380   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318407   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318416   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.318429   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.318475   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318495   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.334365   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0919 17:52:37.334822   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.335368   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.335395   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.335861   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.336052   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.337347   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0919 17:52:37.337347   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0919 17:52:37.337998   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.338047   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.338480   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.338498   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.338610   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.338632   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.338840   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.338941   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.339461   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.339490   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.339536   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.339565   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.354064   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45647
	I0919 17:52:37.354482   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.354893   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.354912   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.355353   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.355578   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.357181   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.359063   45961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:52:37.357674   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0919 17:52:37.358308   45961 addons.go:231] Setting addon default-storageclass=true in "no-preload-215748"
	W0919 17:52:37.360428   45961 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:52:37.360461   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.360569   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:52:37.360583   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:52:37.360602   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.360832   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.360869   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.360891   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.361393   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.361411   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.361836   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.362040   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.363959   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.364124   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.365928   45961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:52:37.364551   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.364765   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.367579   45961 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:52:37.367592   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:52:37.367609   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.367639   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.367660   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.367827   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.368140   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.370800   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.371215   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.371240   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.371416   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.371612   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.371777   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.371914   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.379222   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
	I0919 17:52:37.379631   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.380097   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.380122   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.380481   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.381718   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.381754   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.396647   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0919 17:52:37.397058   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.397474   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.397492   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.397842   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.397994   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.399762   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.400224   45961 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:52:37.400239   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:52:37.400255   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.403299   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.403745   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.403767   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.403773   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.403948   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.404080   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.404221   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.448139   45961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-215748" context rescaled to 1 replicas
	I0919 17:52:37.448183   45961 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:52:37.450076   45961 out.go:177] * Verifying Kubernetes components...
	I0919 17:52:37.451036   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:37.579553   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:52:37.592116   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:52:37.604757   45961 node_ready.go:35] waiting up to 6m0s for node "no-preload-215748" to be "Ready" ...
	I0919 17:52:37.605235   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:52:37.611496   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:52:37.611523   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:52:37.625762   45961 node_ready.go:49] node "no-preload-215748" has status "Ready":"True"
	I0919 17:52:37.625782   45961 node_ready.go:38] duration metric: took 20.997061ms waiting for node "no-preload-215748" to be "Ready" ...
	I0919 17:52:37.625790   45961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:37.638366   45961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.693993   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:52:37.694019   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:52:37.754746   45961 pod_ready.go:92] pod "etcd-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.754769   45961 pod_ready.go:81] duration metric: took 116.377819ms waiting for pod "etcd-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.754782   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.798115   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:52:37.798139   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:52:37.815124   45961 pod_ready.go:92] pod "kube-apiserver-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.815192   45961 pod_ready.go:81] duration metric: took 60.393176ms waiting for pod "kube-apiserver-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.815218   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.922999   45961 pod_ready.go:92] pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.923022   45961 pod_ready.go:81] duration metric: took 107.794672ms waiting for pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.923038   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hk6k2" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.995437   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:52:39.961838   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.382243112s)
	I0919 17:52:39.961884   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.961893   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.961902   45961 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.356635779s)
	I0919 17:52:39.961928   45961 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 17:52:39.961843   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.369699378s)
	I0919 17:52:39.961953   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.961963   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962202   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962219   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962231   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962239   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962348   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962409   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962447   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962490   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962517   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962540   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962553   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962563   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962526   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962601   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962778   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962819   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962828   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962942   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962959   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962972   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:40.064135   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.06864457s)
	I0919 17:52:40.064196   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:40.064212   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:40.064511   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:40.064532   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:40.064542   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:40.064552   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:40.064775   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:40.064835   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:40.064840   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:40.064850   45961 addons.go:467] Verifying addon metrics-server=true in "no-preload-215748"
	I0919 17:52:40.066741   45961 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0919 17:52:37.876720   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:40.068231   45961 addons.go:502] enable addons completed in 2.750388313s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0919 17:52:40.249105   45961 pod_ready.go:102] pod "kube-proxy-hk6k2" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:40.760507   45961 pod_ready.go:92] pod "kube-proxy-hk6k2" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:40.760532   45961 pod_ready.go:81] duration metric: took 2.837485326s waiting for pod "kube-proxy-hk6k2" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.760546   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.770519   45961 pod_ready.go:92] pod "kube-scheduler-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:40.770574   45961 pod_ready.go:81] duration metric: took 9.988955ms waiting for pod "kube-scheduler-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.770610   45961 pod_ready.go:38] duration metric: took 3.144808421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:40.770630   45961 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:52:40.770686   45961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:52:40.806513   45961 api_server.go:72] duration metric: took 3.358300901s to wait for apiserver process to appear ...
	I0919 17:52:40.806538   45961 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:52:40.806556   45961 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0919 17:52:40.812758   45961 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I0919 17:52:40.813960   45961 api_server.go:141] control plane version: v1.28.2
	I0919 17:52:40.813985   45961 api_server.go:131] duration metric: took 7.436946ms to wait for apiserver health ...
	I0919 17:52:40.813996   45961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:52:40.821498   45961 system_pods.go:59] 8 kube-system pods found
	I0919 17:52:40.821525   45961 system_pods.go:61] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:40.821536   45961 system_pods.go:61] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:40.821543   45961 system_pods.go:61] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:40.821549   45961 system_pods.go:61] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:40.821555   45961 system_pods.go:61] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:40.821563   45961 system_pods.go:61] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:40.821572   45961 system_pods.go:61] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:40.821583   45961 system_pods.go:61] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:40.821599   45961 system_pods.go:74] duration metric: took 7.595377ms to wait for pod list to return data ...
	I0919 17:52:40.821608   45961 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:52:40.828423   45961 default_sa.go:45] found service account: "default"
	I0919 17:52:40.828446   45961 default_sa.go:55] duration metric: took 6.830774ms for default service account to be created ...
	I0919 17:52:40.828455   45961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:52:41.018524   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.018560   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.018569   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.018578   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.018585   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.018591   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.018601   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.018612   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.018625   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.018645   45961 retry.go:31] will retry after 307.254812ms: missing components: kube-dns
	I0919 17:52:41.337815   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.337844   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.337851   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.337856   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.337863   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.337869   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.337875   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.337883   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.337893   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.337915   45961 retry.go:31] will retry after 378.465105ms: missing components: kube-dns
	I0919 17:52:41.734680   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.734717   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.734728   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.734736   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.734743   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.734750   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.734757   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.734765   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.734780   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.734801   45961 retry.go:31] will retry after 432.849904ms: missing components: kube-dns
	I0919 17:52:42.176510   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:42.176536   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Running
	I0919 17:52:42.176545   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:42.176552   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:42.176559   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:42.176569   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:42.176576   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:42.176590   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:42.176603   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Running
	I0919 17:52:42.176616   45961 system_pods.go:126] duration metric: took 1.348155168s to wait for k8s-apps to be running ...
	I0919 17:52:42.176628   45961 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:52:42.176683   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:42.189952   45961 system_svc.go:56] duration metric: took 13.312874ms WaitForService to wait for kubelet.
	I0919 17:52:42.189981   45961 kubeadm.go:581] duration metric: took 4.741777133s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:52:42.190012   45961 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:52:42.194919   45961 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:52:42.194945   45961 node_conditions.go:123] node cpu capacity is 2
	I0919 17:52:42.194957   45961 node_conditions.go:105] duration metric: took 4.939533ms to run NodePressure ...
	I0919 17:52:42.194969   45961 start.go:228] waiting for startup goroutines ...
	I0919 17:52:42.194978   45961 start.go:233] waiting for cluster config update ...
	I0919 17:52:42.194988   45961 start.go:242] writing updated cluster config ...
	I0919 17:52:42.195287   45961 ssh_runner.go:195] Run: rm -f paused
	I0919 17:52:42.245669   45961 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:52:42.248021   45961 out.go:177] * Done! kubectl is now configured to use "no-preload-215748" cluster and "default" namespace by default
	I0919 17:52:41.936906   45696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.723493225s)
	I0919 17:52:41.936983   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:41.951451   45696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:52:41.960478   45696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:52:41.968960   45696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:52:41.969031   45696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 17:52:42.019868   45696 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 17:52:42.020027   45696 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:52:42.171083   45696 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:52:42.171221   45696 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:52:42.171332   45696 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:52:42.429760   45696 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:52:42.431619   45696 out.go:204]   - Generating certificates and keys ...
	I0919 17:52:42.431770   45696 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:52:42.431870   45696 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:52:42.431973   45696 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:52:42.432172   45696 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:52:42.432781   45696 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:52:42.433451   45696 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:52:42.434353   45696 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:52:42.435577   45696 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:52:42.436820   45696 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:52:42.438302   45696 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:52:42.439391   45696 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:52:42.439509   45696 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:52:42.929570   45696 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:52:43.332709   45696 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:52:43.433651   45696 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:52:43.695104   45696 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:52:43.696103   45696 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:52:43.699874   45696 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:52:43.701784   45696 out.go:204]   - Booting up control plane ...
	I0919 17:52:43.701926   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:52:43.702063   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:52:43.702819   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:52:43.724659   45696 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:52:43.725576   45696 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:52:43.725671   45696 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 17:52:43.851582   45696 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:52:43.960637   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:47.032663   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:51.355564   45696 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504191 seconds
	I0919 17:52:51.355695   45696 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:52:51.376627   45696 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:52:51.908759   45696 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:52:51.909064   45696 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-415155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 17:52:52.424367   45696 kubeadm.go:322] [bootstrap-token] Using token: kntdz4.46i9d2q57hx70gnb
	I0919 17:52:52.425876   45696 out.go:204]   - Configuring RBAC rules ...
	I0919 17:52:52.425993   45696 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:52:52.433647   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 17:52:52.443514   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:52:52.447239   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:52:52.453258   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:52:52.459432   45696 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:52:52.475208   45696 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 17:52:52.722848   45696 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:52:52.841255   45696 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:52:52.841280   45696 kubeadm.go:322] 
	I0919 17:52:52.841356   45696 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:52:52.841369   45696 kubeadm.go:322] 
	I0919 17:52:52.841456   45696 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:52:52.841464   45696 kubeadm.go:322] 
	I0919 17:52:52.841502   45696 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:52:52.841568   45696 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:52:52.841637   45696 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:52:52.841648   45696 kubeadm.go:322] 
	I0919 17:52:52.841698   45696 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 17:52:52.841704   45696 kubeadm.go:322] 
	I0919 17:52:52.841745   45696 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 17:52:52.841780   45696 kubeadm.go:322] 
	I0919 17:52:52.841875   45696 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:52:52.841942   45696 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:52:52.842039   45696 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:52:52.842048   45696 kubeadm.go:322] 
	I0919 17:52:52.842134   45696 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 17:52:52.842243   45696 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:52:52.842262   45696 kubeadm.go:322] 
	I0919 17:52:52.842358   45696 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kntdz4.46i9d2q57hx70gnb \
	I0919 17:52:52.842491   45696 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 17:52:52.842523   45696 kubeadm.go:322] 	--control-plane 
	I0919 17:52:52.842530   45696 kubeadm.go:322] 
	I0919 17:52:52.842645   45696 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:52:52.842659   45696 kubeadm.go:322] 
	I0919 17:52:52.842773   45696 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kntdz4.46i9d2q57hx70gnb \
	I0919 17:52:52.842930   45696 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:52:52.844420   45696 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:52:52.844450   45696 cni.go:84] Creating CNI manager for ""
	I0919 17:52:52.844461   45696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:52:52.846322   45696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:52:52.848269   45696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:52:52.875578   45696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 17:52:52.905183   45696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:52:52.905261   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:52.905281   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=embed-certs-415155 minikube.k8s.io/updated_at=2023_09_19T17_52_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:52.993717   45696 ops.go:34] apiserver oom_adj: -16
	I0919 17:52:53.208727   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.311165   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.904182   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:54.403711   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:54.904152   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:55.404377   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.108640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:55.903772   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.404320   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.904201   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:57.403637   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:57.904174   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:58.404553   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:58.903691   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:59.403716   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:59.903872   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:00.403725   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.180664   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:00.904540   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:01.404211   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:01.903897   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.403857   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.903841   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:03.404601   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:03.904222   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:04.404483   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:04.903813   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:05.404474   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.260629   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:05.332731   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:05.904337   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:06.003333   45696 kubeadm.go:1081] duration metric: took 13.098131801s to wait for elevateKubeSystemPrivileges.
	I0919 17:53:06.003365   45696 kubeadm.go:406] StartCluster complete in 5m10.10389936s
	I0919 17:53:06.003387   45696 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:53:06.003476   45696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:53:06.005541   45696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:53:06.005772   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:53:06.005785   45696 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:53:06.005854   45696 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-415155"
	I0919 17:53:06.005877   45696 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-415155"
	W0919 17:53:06.005884   45696 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:53:06.005926   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.005930   45696 addons.go:69] Setting default-storageclass=true in profile "embed-certs-415155"
	I0919 17:53:06.005946   45696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-415155"
	I0919 17:53:06.005979   45696 config.go:182] Loaded profile config "embed-certs-415155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:53:06.005982   45696 addons.go:69] Setting metrics-server=true in profile "embed-certs-415155"
	I0919 17:53:06.006009   45696 addons.go:231] Setting addon metrics-server=true in "embed-certs-415155"
	W0919 17:53:06.006026   45696 addons.go:240] addon metrics-server should already be in state true
	I0919 17:53:06.006071   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.006331   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006328   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006364   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.006396   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.006451   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006493   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.023141   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43557
	I0919 17:53:06.023485   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I0919 17:53:06.023646   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I0919 17:53:06.023657   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.023882   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.024040   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.024209   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024230   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.024333   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024358   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.024616   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.024697   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.024810   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024827   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.025260   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.025301   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.025486   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.025695   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.026032   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.026062   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.044712   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
	I0919 17:53:06.045176   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.045627   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.045646   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.045976   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.046161   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.047603   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.049519   45696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:53:06.047878   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0919 17:53:06.052909   45696 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:53:06.052922   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:53:06.052937   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.053277   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.053868   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.053887   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.054337   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.054580   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.056666   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.056710   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.058604   45696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:53:06.057084   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.057313   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.060027   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.060046   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:53:06.060060   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:53:06.060079   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.060210   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.060497   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.060815   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.062794   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.063165   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.063196   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.063327   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.063475   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.063593   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.063701   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.066891   45696 addons.go:231] Setting addon default-storageclass=true in "embed-certs-415155"
	W0919 17:53:06.066905   45696 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:53:06.066926   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.066965   45696 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-415155" context rescaled to 1 replicas
	I0919 17:53:06.066987   45696 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.6 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:53:06.068622   45696 out.go:177] * Verifying Kubernetes components...
	I0919 17:53:06.067176   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.070241   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.070253   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:53:06.085010   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0919 17:53:06.085392   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.085940   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.085976   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.086322   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.086774   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.086820   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.101494   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0919 17:53:06.101938   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.102528   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.102552   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.103014   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.103256   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.104793   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.105087   45696 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:53:06.105107   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:53:06.105127   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.107742   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.108073   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.108105   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.108336   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.108547   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.108744   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.108908   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.205454   45696 node_ready.go:35] waiting up to 6m0s for node "embed-certs-415155" to be "Ready" ...
	I0919 17:53:06.205565   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:53:06.225929   45696 node_ready.go:49] node "embed-certs-415155" has status "Ready":"True"
	I0919 17:53:06.225949   45696 node_ready.go:38] duration metric: took 20.464817ms waiting for node "embed-certs-415155" to be "Ready" ...
	I0919 17:53:06.225957   45696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:53:06.251954   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:53:06.251981   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:53:06.269198   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:53:06.296923   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:53:06.314108   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:53:06.314141   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:53:06.338106   45696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:06.378123   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:53:06.378154   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:53:06.492313   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:53:08.235564   45696 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.029959877s)
	I0919 17:53:08.235599   45696 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0919 17:53:08.597917   45696 pod_ready.go:102] pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace has status "Ready":"False"
	I0919 17:53:08.741920   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.44495643s)
	I0919 17:53:08.741982   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.741995   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.741926   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.472691573s)
	I0919 17:53:08.742031   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742050   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742377   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742393   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742403   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742413   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742492   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.742542   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742555   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742566   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742576   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742617   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742630   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742643   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742651   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742771   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742785   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.744274   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.744297   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.818418   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.326058126s)
	I0919 17:53:08.818472   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.818486   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.818839   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.818891   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.818927   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.818938   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.818948   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.820442   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.820464   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.820474   45696 addons.go:467] Verifying addon metrics-server=true in "embed-certs-415155"
	I0919 17:53:08.820479   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.822508   45696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0919 17:53:08.824220   45696 addons.go:502] enable addons completed in 2.818433307s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0919 17:53:10.561437   45696 pod_ready.go:92] pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.561462   45696 pod_ready.go:81] duration metric: took 4.223330172s waiting for pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.561472   45696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.568541   45696 pod_ready.go:92] pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.568566   45696 pod_ready.go:81] duration metric: took 7.086927ms waiting for pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.568579   45696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.577684   45696 pod_ready.go:92] pod "etcd-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.577709   45696 pod_ready.go:81] duration metric: took 9.120912ms waiting for pod "etcd-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.577722   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.585005   45696 pod_ready.go:92] pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.585033   45696 pod_ready.go:81] duration metric: took 7.302173ms waiting for pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.585043   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.590934   45696 pod_ready.go:92] pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.590951   45696 pod_ready.go:81] duration metric: took 5.90203ms waiting for pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.590960   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b75j2" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.358510   45696 pod_ready.go:92] pod "kube-proxy-b75j2" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:11.358535   45696 pod_ready.go:81] duration metric: took 767.569086ms waiting for pod "kube-proxy-b75j2" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.358544   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.759839   45696 pod_ready.go:92] pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:11.759863   45696 pod_ready.go:81] duration metric: took 401.313058ms waiting for pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.759872   45696 pod_ready.go:38] duration metric: took 5.533896789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:53:11.759887   45696 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:53:11.759933   45696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:53:11.773700   45696 api_server.go:72] duration metric: took 5.706687251s to wait for apiserver process to appear ...
	I0919 17:53:11.773730   45696 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:53:11.773747   45696 api_server.go:253] Checking apiserver healthz at https://192.168.50.6:8443/healthz ...
	I0919 17:53:11.784435   45696 api_server.go:279] https://192.168.50.6:8443/healthz returned 200:
	ok
	I0919 17:53:11.785929   45696 api_server.go:141] control plane version: v1.28.2
	I0919 17:53:11.785952   45696 api_server.go:131] duration metric: took 12.214361ms to wait for apiserver health ...
	I0919 17:53:11.785971   45696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:53:11.961906   45696 system_pods.go:59] 9 kube-system pods found
	I0919 17:53:11.961937   45696 system_pods.go:61] "coredns-5dd5756b68-2dbbk" [93175ebd-b717-4c98-a56b-aca1404ac8bd] Running
	I0919 17:53:11.961945   45696 system_pods.go:61] "coredns-5dd5756b68-6gswl" [6dee60e5-fa0a-4b65-b4d1-dbb0fff29d3f] Running
	I0919 17:53:11.961952   45696 system_pods.go:61] "etcd-embed-certs-415155" [47f8759f-be32-4f38-9bc4-c9a0578b6303] Running
	I0919 17:53:11.961959   45696 system_pods.go:61] "kube-apiserver-embed-certs-415155" [8682e140-4951-4ea4-a7df-5b1324e33094] Running
	I0919 17:53:11.961967   45696 system_pods.go:61] "kube-controller-manager-embed-certs-415155" [00572708-93f6-4bf5-b6c4-83e9c588b071] Running
	I0919 17:53:11.961973   45696 system_pods.go:61] "kube-proxy-b75j2" [7be05aae-86ca-4640-a0f3-6518e7896711] Running
	I0919 17:53:11.961981   45696 system_pods.go:61] "kube-scheduler-embed-certs-415155" [4532a79d-a306-46a9-a0aa-d1f48b90645c] Running
	I0919 17:53:11.961991   45696 system_pods.go:61] "metrics-server-57f55c9bc5-kdxsz" [1588f0a7-18ae-402b-8916-e3a6423e9e15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:53:11.962003   45696 system_pods.go:61] "storage-provisioner" [83e9eb53-dd92-4b84-a787-82bea5449cd2] Running
	I0919 17:53:11.962013   45696 system_pods.go:74] duration metric: took 176.035985ms to wait for pod list to return data ...
	I0919 17:53:11.962027   45696 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:53:12.157305   45696 default_sa.go:45] found service account: "default"
	I0919 17:53:12.157328   45696 default_sa.go:55] duration metric: took 195.295342ms for default service account to be created ...
	I0919 17:53:12.157336   45696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:53:12.359884   45696 system_pods.go:86] 9 kube-system pods found
	I0919 17:53:12.359910   45696 system_pods.go:89] "coredns-5dd5756b68-2dbbk" [93175ebd-b717-4c98-a56b-aca1404ac8bd] Running
	I0919 17:53:12.359916   45696 system_pods.go:89] "coredns-5dd5756b68-6gswl" [6dee60e5-fa0a-4b65-b4d1-dbb0fff29d3f] Running
	I0919 17:53:12.359920   45696 system_pods.go:89] "etcd-embed-certs-415155" [47f8759f-be32-4f38-9bc4-c9a0578b6303] Running
	I0919 17:53:12.359924   45696 system_pods.go:89] "kube-apiserver-embed-certs-415155" [8682e140-4951-4ea4-a7df-5b1324e33094] Running
	I0919 17:53:12.359929   45696 system_pods.go:89] "kube-controller-manager-embed-certs-415155" [00572708-93f6-4bf5-b6c4-83e9c588b071] Running
	I0919 17:53:12.359932   45696 system_pods.go:89] "kube-proxy-b75j2" [7be05aae-86ca-4640-a0f3-6518e7896711] Running
	I0919 17:53:12.359936   45696 system_pods.go:89] "kube-scheduler-embed-certs-415155" [4532a79d-a306-46a9-a0aa-d1f48b90645c] Running
	I0919 17:53:12.359943   45696 system_pods.go:89] "metrics-server-57f55c9bc5-kdxsz" [1588f0a7-18ae-402b-8916-e3a6423e9e15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:53:12.359948   45696 system_pods.go:89] "storage-provisioner" [83e9eb53-dd92-4b84-a787-82bea5449cd2] Running
	I0919 17:53:12.359956   45696 system_pods.go:126] duration metric: took 202.614357ms to wait for k8s-apps to be running ...
	I0919 17:53:12.359962   45696 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:53:12.359999   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:53:12.373545   45696 system_svc.go:56] duration metric: took 13.572497ms WaitForService to wait for kubelet.
	I0919 17:53:12.373579   45696 kubeadm.go:581] duration metric: took 6.30657382s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:53:12.373607   45696 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:53:12.557409   45696 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:53:12.557435   45696 node_conditions.go:123] node cpu capacity is 2
	I0919 17:53:12.557444   45696 node_conditions.go:105] duration metric: took 183.83246ms to run NodePressure ...
	I0919 17:53:12.557455   45696 start.go:228] waiting for startup goroutines ...
	I0919 17:53:12.557461   45696 start.go:233] waiting for cluster config update ...
	I0919 17:53:12.557469   45696 start.go:242] writing updated cluster config ...
	I0919 17:53:12.557699   45696 ssh_runner.go:195] Run: rm -f paused
	I0919 17:53:12.605145   45696 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:53:12.607197   45696 out.go:177] * Done! kubectl is now configured to use "embed-certs-415155" cluster and "default" namespace by default
	I0919 17:53:11.412630   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:14.488732   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:20.564623   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:23.636680   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:29.716717   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:32.788701   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:38.868669   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:41.940647   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:48.020643   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:51.092656   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:57.172691   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:54:00.244719   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:54:03.245602   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:54:03.245640   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:03.247321   47798 machine.go:91] provisioned docker machine in 4m37.423277683s
	I0919 17:54:03.247365   47798 fix.go:56] fixHost completed within 4m37.445374366s
	I0919 17:54:03.247373   47798 start.go:83] releasing machines lock for "old-k8s-version-100627", held for 4m37.445391375s
	W0919 17:54:03.247389   47798 start.go:688] error starting host: provision: host is not running
	W0919 17:54:03.247488   47798 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0919 17:54:03.247503   47798 start.go:703] Will try again in 5 seconds ...
	I0919 17:54:08.249214   47798 start.go:365] acquiring machines lock for old-k8s-version-100627: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:54:08.249335   47798 start.go:369] acquired machines lock for "old-k8s-version-100627" in 79.973µs
	I0919 17:54:08.249367   47798 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:54:08.249377   47798 fix.go:54] fixHost starting: 
	I0919 17:54:08.249707   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:54:08.249734   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:54:08.264866   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I0919 17:54:08.265315   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:54:08.265726   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:54:08.265759   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:54:08.266072   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:54:08.266269   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:08.266419   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:54:08.267941   47798 fix.go:102] recreateIfNeeded on old-k8s-version-100627: state=Stopped err=<nil>
	I0919 17:54:08.267960   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	W0919 17:54:08.268118   47798 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:54:08.269915   47798 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-100627" ...
	I0919 17:54:08.271210   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Start
	I0919 17:54:08.271445   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring networks are active...
	I0919 17:54:08.272016   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network default is active
	I0919 17:54:08.272329   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network mk-old-k8s-version-100627 is active
	I0919 17:54:08.272743   47798 main.go:141] libmachine: (old-k8s-version-100627) Getting domain xml...
	I0919 17:54:08.273350   47798 main.go:141] libmachine: (old-k8s-version-100627) Creating domain...
	I0919 17:54:09.557879   47798 main.go:141] libmachine: (old-k8s-version-100627) Waiting to get IP...
	I0919 17:54:09.558718   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:09.559190   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:09.559270   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:09.559173   48693 retry.go:31] will retry after 309.613104ms: waiting for machine to come up
	I0919 17:54:09.870868   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:09.871472   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:09.871496   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:09.871435   48693 retry.go:31] will retry after 375.744574ms: waiting for machine to come up
	I0919 17:54:10.249255   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.249750   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.249780   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.249702   48693 retry.go:31] will retry after 305.257713ms: waiting for machine to come up
	I0919 17:54:10.556042   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.556587   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.556621   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.556510   48693 retry.go:31] will retry after 394.207165ms: waiting for machine to come up
	I0919 17:54:10.952178   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.952797   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.952828   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.952732   48693 retry.go:31] will retry after 706.704251ms: waiting for machine to come up
	I0919 17:54:11.660566   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:11.661038   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:11.661061   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:11.660988   48693 retry.go:31] will retry after 924.155076ms: waiting for machine to come up
	I0919 17:54:12.586278   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:12.586772   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:12.586805   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:12.586721   48693 retry.go:31] will retry after 1.035300526s: waiting for machine to come up
	I0919 17:54:13.623123   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:13.623597   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:13.623622   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:13.623562   48693 retry.go:31] will retry after 1.060639157s: waiting for machine to come up
	I0919 17:54:14.685531   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:14.686012   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:14.686044   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:14.685973   48693 retry.go:31] will retry after 1.61320677s: waiting for machine to come up
	I0919 17:54:16.301447   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:16.301908   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:16.301957   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:16.301864   48693 retry.go:31] will retry after 2.031293541s: waiting for machine to come up
	I0919 17:54:18.334791   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:18.335384   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:18.335440   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:18.335329   48693 retry.go:31] will retry after 1.861837572s: waiting for machine to come up
	I0919 17:54:20.199546   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:20.200058   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:20.200088   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:20.200009   48693 retry.go:31] will retry after 2.332364238s: waiting for machine to come up
	I0919 17:54:22.533654   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:22.534131   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:22.534162   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:22.534071   48693 retry.go:31] will retry after 4.475201998s: waiting for machine to come up
	I0919 17:54:27.013553   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.014052   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has current primary IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.014075   47798 main.go:141] libmachine: (old-k8s-version-100627) Found IP for machine: 192.168.72.182
	I0919 17:54:27.014091   47798 main.go:141] libmachine: (old-k8s-version-100627) Reserving static IP address...
	I0919 17:54:27.014512   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "old-k8s-version-100627", mac: "52:54:00:ee:1d:e7", ip: "192.168.72.182"} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.014535   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | skip adding static IP to network mk-old-k8s-version-100627 - found existing host DHCP lease matching {name: "old-k8s-version-100627", mac: "52:54:00:ee:1d:e7", ip: "192.168.72.182"}
	I0919 17:54:27.014560   47798 main.go:141] libmachine: (old-k8s-version-100627) Reserved static IP address: 192.168.72.182
	I0919 17:54:27.014579   47798 main.go:141] libmachine: (old-k8s-version-100627) Waiting for SSH to be available...
	I0919 17:54:27.014592   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Getting to WaitForSSH function...
	I0919 17:54:27.016929   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.017394   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.017431   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.017594   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH client type: external
	I0919 17:54:27.017634   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa (-rw-------)
	I0919 17:54:27.017678   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:54:27.017700   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | About to run SSH command:
	I0919 17:54:27.017711   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | exit 0
	I0919 17:54:27.112557   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | SSH cmd err, output: <nil>: 
	I0919 17:54:27.112933   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetConfigRaw
	I0919 17:54:27.113574   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:27.116176   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.116556   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.116581   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.116841   47798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:54:27.117019   47798 machine.go:88] provisioning docker machine ...
	I0919 17:54:27.117036   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:27.117261   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.117429   47798 buildroot.go:166] provisioning hostname "old-k8s-version-100627"
	I0919 17:54:27.117447   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.117599   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.119667   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.119987   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.120020   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.120131   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.120278   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.120442   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.120625   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.120795   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.121114   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.121128   47798 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100627 && echo "old-k8s-version-100627" | sudo tee /etc/hostname
	I0919 17:54:27.264601   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-100627
	
	I0919 17:54:27.264628   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.267433   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.267871   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.267906   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.268044   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.268260   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.268459   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.268589   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.268764   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.269227   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.269258   47798 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-100627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-100627/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-100627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:54:27.408513   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:54:27.408544   47798 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:54:27.408566   47798 buildroot.go:174] setting up certificates
	I0919 17:54:27.408590   47798 provision.go:83] configureAuth start
	I0919 17:54:27.408607   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.408923   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:27.411896   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.412345   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.412376   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.412595   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.414909   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.415293   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.415331   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.415417   47798 provision.go:138] copyHostCerts
	I0919 17:54:27.415479   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:54:27.415491   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:54:27.415556   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:54:27.415662   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:54:27.415675   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:54:27.415721   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:54:27.415941   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:54:27.415954   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:54:27.415990   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:54:27.416043   47798 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-100627 san=[192.168.72.182 192.168.72.182 localhost 127.0.0.1 minikube old-k8s-version-100627]
	I0919 17:54:27.473903   47798 provision.go:172] copyRemoteCerts
	I0919 17:54:27.473953   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:54:27.473978   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.476857   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.477234   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.477272   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.477453   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.477649   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.477818   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.477957   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:27.578694   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:54:27.603580   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 17:54:27.629314   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:54:27.653764   47798 provision.go:86] duration metric: configureAuth took 245.159127ms
	I0919 17:54:27.653788   47798 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:54:27.653989   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:54:27.654081   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.656608   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.657058   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.657113   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.657286   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.657453   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.657605   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.657785   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.657972   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.658276   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.658292   47798 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:54:28.000190   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:54:28.000238   47798 machine.go:91] provisioned docker machine in 883.206741ms
	I0919 17:54:28.000251   47798 start.go:300] post-start starting for "old-k8s-version-100627" (driver="kvm2")
	I0919 17:54:28.000265   47798 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:54:28.000288   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.000617   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:54:28.000650   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.003541   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.003980   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.004027   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.004182   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.004383   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.004583   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.004749   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.099219   47798 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:54:28.103738   47798 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:54:28.103766   47798 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:54:28.103853   47798 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:54:28.103953   47798 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:54:28.104066   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:54:28.115827   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:54:28.139080   47798 start.go:303] post-start completed in 138.802144ms
	I0919 17:54:28.139102   47798 fix.go:56] fixHost completed within 19.88972528s
	I0919 17:54:28.139121   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.141760   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.142169   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.142195   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.142396   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.142607   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.142726   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.142917   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.143114   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:28.143573   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:28.143592   47798 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:54:28.277495   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695146068.223192427
	
	I0919 17:54:28.277520   47798 fix.go:206] guest clock: 1695146068.223192427
	I0919 17:54:28.277530   47798 fix.go:219] Guest: 2023-09-19 17:54:28.223192427 +0000 UTC Remote: 2023-09-19 17:54:28.139105122 +0000 UTC m=+302.480491248 (delta=84.087305ms)
	I0919 17:54:28.277553   47798 fix.go:190] guest clock delta is within tolerance: 84.087305ms
	I0919 17:54:28.277559   47798 start.go:83] releasing machines lock for "old-k8s-version-100627", held for 20.02820818s
	I0919 17:54:28.277581   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.277863   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:28.280976   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.281274   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.281314   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.281491   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282065   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282261   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282362   47798 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:54:28.282425   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.282518   47798 ssh_runner.go:195] Run: cat /version.json
	I0919 17:54:28.282557   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.285235   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285574   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285626   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.285660   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285758   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.285980   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.286009   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.286037   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.286133   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.286185   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.286298   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.286345   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.286479   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.286613   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.377342   47798 ssh_runner.go:195] Run: systemctl --version
	I0919 17:54:28.402900   47798 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:54:28.551979   47798 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:54:28.558949   47798 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:54:28.559040   47798 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:54:28.574671   47798 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:54:28.574707   47798 start.go:469] detecting cgroup driver to use...
	I0919 17:54:28.574789   47798 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:54:28.589301   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:54:28.603381   47798 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:54:28.603456   47798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:54:28.616574   47798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:54:28.630029   47798 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:54:28.735665   47798 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:54:28.855576   47798 docker.go:212] disabling docker service ...
	I0919 17:54:28.855656   47798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:54:28.869977   47798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:54:28.883344   47798 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:54:29.010033   47798 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:54:29.123737   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:54:29.136560   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:54:29.153418   47798 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0919 17:54:29.153472   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.164328   47798 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:54:29.164376   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.175468   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.186361   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.197606   47798 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:54:29.209144   47798 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:54:29.219566   47798 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:54:29.219608   47798 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 17:54:29.232771   47798 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:54:29.241491   47798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:54:29.363253   47798 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:54:29.564774   47798 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:54:29.564853   47798 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:54:29.570170   47798 start.go:537] Will wait 60s for crictl version
	I0919 17:54:29.570236   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:29.574361   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:54:29.613496   47798 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:54:29.613591   47798 ssh_runner.go:195] Run: crio --version
	I0919 17:54:29.668331   47798 ssh_runner.go:195] Run: crio --version
	I0919 17:54:29.724060   47798 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0919 17:54:29.725565   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:29.728603   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:29.729060   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:29.729090   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:29.729325   47798 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0919 17:54:29.733860   47798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:54:29.745878   47798 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:54:29.745937   47798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:54:29.783853   47798 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:54:29.783912   47798 ssh_runner.go:195] Run: which lz4
	I0919 17:54:29.787843   47798 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 17:54:29.792095   47798 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:54:29.792124   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0919 17:54:31.578682   47798 crio.go:444] Took 1.790863 seconds to copy over tarball
	I0919 17:54:31.578766   47798 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 17:54:34.491190   47798 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.912396501s)
	I0919 17:54:34.491218   47798 crio.go:451] Took 2.912514 seconds to extract the tarball
	I0919 17:54:34.491227   47798 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 17:54:34.532896   47798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:54:34.584238   47798 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:54:34.584259   47798 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 17:54:34.584318   47798 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.584343   47798 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0919 17:54:34.584357   47798 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.584378   47798 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:34.584540   47798 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.584551   47798 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.584565   47798 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.584321   47798 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:34.586227   47798 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:34.586227   47798 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.586253   47798 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.586228   47798 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.586234   47798 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0919 17:54:34.586352   47798 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.586266   47798 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:34.586581   47798 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.759785   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0919 17:54:34.802920   47798 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0919 17:54:34.802955   47798 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0919 17:54:34.803013   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:34.807458   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0919 17:54:34.847013   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0919 17:54:34.847128   47798 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.852501   47798 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0919 17:54:34.852523   47798 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.852579   47798 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.853807   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.857117   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.858504   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.859676   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.868306   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.920560   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:35.645907   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:37.386271   47798 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.533664793s)
	I0919 17:54:37.386302   47798 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0919 17:54:37.386337   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2: (2.532490506s)
	I0919 17:54:37.386377   47798 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0919 17:54:37.386391   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0: (2.529252811s)
	I0919 17:54:37.386410   47798 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:37.386437   47798 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0919 17:54:37.386458   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386462   47798 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:37.386469   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0: (2.527943734s)
	I0919 17:54:37.386508   47798 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0919 17:54:37.386516   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386529   47798 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:37.386549   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0: (2.526835511s)
	I0919 17:54:37.386581   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0: (2.518230422s)
	I0919 17:54:37.386605   47798 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0919 17:54:37.386609   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0: (2.466014033s)
	I0919 17:54:37.386609   47798 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0919 17:54:37.386628   47798 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:37.386629   47798 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:37.386638   47798 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0919 17:54:37.386566   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386659   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386659   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386662   47798 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:37.386765   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386701   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.740765346s)
	I0919 17:54:37.399029   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:37.399077   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:37.399121   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:37.399122   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:37.402150   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:37.402313   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0919 17:54:37.540994   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0919 17:54:37.541026   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0919 17:54:37.541059   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0919 17:54:37.541106   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0919 17:54:37.541145   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0919 17:54:37.549028   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0919 17:54:37.549081   47798 cache_images.go:92] LoadImages completed in 2.964810789s
	W0919 17:54:37.549147   47798 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0919 17:54:37.549230   47798 ssh_runner.go:195] Run: crio config
	I0919 17:54:37.603915   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:54:37.603954   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:54:37.603977   47798 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:54:37.604007   47798 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-100627 NodeName:old-k8s-version-100627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0919 17:54:37.604180   47798 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-100627"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-100627
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.182:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:54:37.604310   47798 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-100627 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:54:37.604383   47798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0919 17:54:37.614235   47798 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:54:37.614296   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:54:37.623423   47798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0919 17:54:37.640384   47798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:54:37.656081   47798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0919 17:54:37.672787   47798 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0919 17:54:37.676417   47798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:54:37.687828   47798 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627 for IP: 192.168.72.182
	I0919 17:54:37.687874   47798 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:54:37.688058   47798 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:54:37.688143   47798 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:54:37.688222   47798 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.key
	I0919 17:54:37.688279   47798 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key.3425b032
	I0919 17:54:37.688322   47798 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key
	I0919 17:54:37.688488   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:54:37.688531   47798 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:54:37.688546   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:54:37.688579   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:54:37.688609   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:54:37.688636   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:54:37.688697   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:54:37.689406   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:54:37.714671   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 17:54:37.737884   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:54:37.761839   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 17:54:37.784692   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:54:37.810865   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:54:37.832897   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:54:37.856026   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:54:37.879335   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:54:37.902377   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:54:37.924388   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:54:37.948816   47798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:54:37.965669   47798 ssh_runner.go:195] Run: openssl version
	I0919 17:54:37.971227   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:54:37.983269   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.988756   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.988807   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.994392   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:54:38.006098   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:54:38.017868   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.022601   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.022655   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.028421   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:54:38.039288   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:54:38.053131   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.057881   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.057938   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.063816   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
	I0919 17:54:38.074972   47798 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:54:38.080260   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:54:38.085942   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:54:38.091638   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:54:38.097282   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:54:38.103194   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 17:54:38.109759   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 17:54:38.115202   47798 kubeadm.go:404] StartCluster: {Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:54:38.115274   47798 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 17:54:38.115313   47798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:54:38.153988   47798 cri.go:89] found id: ""
	I0919 17:54:38.154063   47798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:54:38.164888   47798 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0919 17:54:38.164913   47798 kubeadm.go:636] restartCluster start
	I0919 17:54:38.164965   47798 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 17:54:38.174810   47798 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.175856   47798 kubeconfig.go:92] found "old-k8s-version-100627" server: "https://192.168.72.182:8443"
	I0919 17:54:38.178372   47798 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 17:54:38.187917   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.187969   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.199654   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.199674   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.199715   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.211155   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.712221   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.712312   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.725306   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:39.211431   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:39.211494   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:39.223919   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:39.711400   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:39.711482   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:39.724103   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:40.211311   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:40.211379   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:40.224111   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:40.711529   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:40.711609   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:40.724291   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:41.212183   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:41.212285   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:41.225226   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:41.711742   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:41.711821   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:41.724590   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:42.212221   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:42.212289   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:42.225772   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:42.711304   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:42.711378   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:42.724468   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:43.211895   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:43.211978   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:43.225017   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:43.711734   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:43.711824   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:43.724995   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:44.211535   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:44.211616   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:44.224372   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:44.712113   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:44.712179   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:44.725330   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:45.211942   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:45.212027   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:45.226290   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:45.712216   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:45.712295   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:45.725065   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:46.212053   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:46.212150   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:46.226417   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:46.711997   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:46.712082   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:46.725608   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:47.212214   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:47.212300   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:47.224935   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:47.711452   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:47.711540   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:47.723970   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:48.188749   47798 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0919 17:54:48.188785   47798 kubeadm.go:1128] stopping kube-system containers ...
	I0919 17:54:48.188800   47798 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0919 17:54:48.188862   47798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:54:48.227729   47798 cri.go:89] found id: ""
	I0919 17:54:48.227789   47798 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 17:54:48.243618   47798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:54:48.253221   47798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:54:48.253285   47798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:54:48.262806   47798 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0919 17:54:48.262831   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:48.405093   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.114151   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.324152   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.457833   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.554530   47798 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:54:49.554595   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:49.568050   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:50.092864   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:50.592484   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:51.092979   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:51.114757   47798 api_server.go:72] duration metric: took 1.560225697s to wait for apiserver process to appear ...
	I0919 17:54:51.114781   47798 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:54:51.114800   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:56.115914   47798 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 17:54:56.115962   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:57.769883   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:54:57.769915   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:54:58.270598   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:58.278169   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:54:58.278210   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0919 17:54:58.770880   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:58.778649   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:54:58.778679   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0919 17:54:59.270233   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:59.276275   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0919 17:54:59.283868   47798 api_server.go:141] control plane version: v1.16.0
	I0919 17:54:59.283896   47798 api_server.go:131] duration metric: took 8.169106612s to wait for apiserver health ...
	I0919 17:54:59.283908   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:54:59.283916   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:54:59.285960   47798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:54:59.287537   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:54:59.298142   47798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 17:54:59.315861   47798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:54:59.324878   47798 system_pods.go:59] 8 kube-system pods found
	I0919 17:54:59.324917   47798 system_pods.go:61] "coredns-5644d7b6d9-4mh4f" [382ef590-a6ef-4402-8762-1649f060fbc4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:54:59.324940   47798 system_pods.go:61] "coredns-5644d7b6d9-wqwp7" [8756ca49-2953-422d-a534-6d1fa5655fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:54:59.324947   47798 system_pods.go:61] "etcd-old-k8s-version-100627" [1e7bdb28-9c7e-4cae-a87e-ec2fad64e820] Running
	I0919 17:54:59.324955   47798 system_pods.go:61] "kube-apiserver-old-k8s-version-100627" [59a703b6-7c16-48ba-8a78-c1ecd606f138] Running
	I0919 17:54:59.324966   47798 system_pods.go:61] "kube-controller-manager-old-k8s-version-100627" [ac10d741-9a7d-45a1-86f5-a912075b49b9] Running
	I0919 17:54:59.324971   47798 system_pods.go:61] "kube-proxy-j7kqn" [79381ec1-45a7-4424-8383-f97b530979d3] Running
	I0919 17:54:59.324986   47798 system_pods.go:61] "kube-scheduler-old-k8s-version-100627" [40df95ee-b184-48ff-b276-d01c7763c7fc] Running
	I0919 17:54:59.324993   47798 system_pods.go:61] "storage-provisioner" [00e5e0c9-0453-440b-aa5c-e6811f428297] Running
	I0919 17:54:59.325005   47798 system_pods.go:74] duration metric: took 9.119135ms to wait for pod list to return data ...
	I0919 17:54:59.325017   47798 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:54:59.328813   47798 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:54:59.328845   47798 node_conditions.go:123] node cpu capacity is 2
	I0919 17:54:59.328859   47798 node_conditions.go:105] duration metric: took 3.833575ms to run NodePressure ...
	I0919 17:54:59.328879   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:59.658953   47798 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:54:59.662655   47798 retry.go:31] will retry after 352.037588ms: kubelet not initialised
	I0919 17:55:00.020425   47798 retry.go:31] will retry after 411.927656ms: kubelet not initialised
	I0919 17:55:00.438027   47798 retry.go:31] will retry after 483.370654ms: kubelet not initialised
	I0919 17:55:00.928598   47798 retry.go:31] will retry after 987.946924ms: kubelet not initialised
	I0919 17:55:01.923328   47798 retry.go:31] will retry after 1.679023275s: kubelet not initialised
	I0919 17:55:03.607494   47798 retry.go:31] will retry after 1.92599571s: kubelet not initialised
	I0919 17:55:05.539070   47798 retry.go:31] will retry after 2.735570072s: kubelet not initialised
	I0919 17:55:08.280198   47798 retry.go:31] will retry after 4.516491636s: kubelet not initialised
	I0919 17:55:12.803629   47798 retry.go:31] will retry after 9.24421999s: kubelet not initialised
	I0919 17:55:22.053509   47798 retry.go:31] will retry after 10.860983763s: kubelet not initialised
	I0919 17:55:32.921288   47798 retry.go:31] will retry after 19.590918142s: kubelet not initialised
	I0919 17:55:52.517612   47798 kubeadm.go:787] kubelet initialised
	I0919 17:55:52.517637   47798 kubeadm.go:788] duration metric: took 52.858662322s waiting for restarted kubelet to initialise ...
	I0919 17:55:52.517644   47798 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:55:52.523992   47798 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.530133   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.530151   47798 pod_ready.go:81] duration metric: took 6.127596ms waiting for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.530160   47798 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.535186   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.535202   47798 pod_ready.go:81] duration metric: took 5.035759ms waiting for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.535209   47798 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.540300   47798 pod_ready.go:92] pod "etcd-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.540317   47798 pod_ready.go:81] duration metric: took 5.101572ms waiting for pod "etcd-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.540324   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.546670   47798 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.546687   47798 pod_ready.go:81] duration metric: took 6.356984ms waiting for pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.546696   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.916320   47798 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.916342   47798 pod_ready.go:81] duration metric: took 369.639886ms waiting for pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.916353   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j7kqn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.316733   47798 pod_ready.go:92] pod "kube-proxy-j7kqn" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:53.316762   47798 pod_ready.go:81] duration metric: took 400.400609ms waiting for pod "kube-proxy-j7kqn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.316788   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.717319   47798 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:53.717344   47798 pod_ready.go:81] duration metric: took 400.544097ms waiting for pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.717358   47798 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:56.023621   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:55:58.025543   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:00.522985   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:02.523350   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:05.022971   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:07.023767   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:09.524598   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:12.024269   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:14.524109   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:16.525347   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:19.025990   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:21.522712   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:23.523098   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:25.525823   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:27.526575   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:30.023751   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:32.023914   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:34.523709   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:37.025284   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:39.523886   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:42.023525   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:44.023602   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:46.524942   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:49.023162   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:51.025968   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:53.523737   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:55.524950   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:58.023648   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:00.024635   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:02.024981   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:04.524374   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:07.024495   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:09.523646   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:12.023778   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:14.024012   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:16.024668   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:18.524581   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:20.525264   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:23.024223   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:25.024271   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:27.024863   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:29.524389   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:31.524867   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:34.026361   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:36.523516   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:38.523641   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:40.525417   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:43.023938   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:45.024235   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:47.025554   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:49.524344   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:52.023880   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:54.024324   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:56.024615   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:58.523806   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:00.524330   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:02.524813   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:05.023667   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:07.024328   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:09.521983   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:11.524126   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:14.033167   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:16.524193   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:19.023478   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:21.023719   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:23.024876   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:25.525000   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:28.022897   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:30.023651   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:32.523506   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:35.023201   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:37.024229   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:39.522709   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:41.524752   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:44.022121   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:46.025229   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:48.523728   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:50.524600   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:53.024769   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:55.523745   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:58.025806   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:00.524396   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:03.023037   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:05.023335   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:07.024052   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:09.024205   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:11.523020   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:13.524065   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:16.025098   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:18.523293   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:20.525391   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:23.025049   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:25.522619   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:27.525208   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:30.024344   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:32.024984   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:34.523267   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:36.524365   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:39.023558   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:41.523208   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:43.524139   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:46.023918   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:48.523431   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:50.523998   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:53.024150   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:53.718434   47798 pod_ready.go:81] duration metric: took 4m0.001059167s waiting for pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace to be "Ready" ...
	E0919 17:59:53.718466   47798 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:59:53.718484   47798 pod_ready.go:38] duration metric: took 4m1.200831266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:59:53.718520   47798 kubeadm.go:640] restartCluster took 5m15.553599416s
	W0919 17:59:53.718575   47798 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0919 17:59:53.718604   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:59:58.500835   47798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.782205666s)
	I0919 17:59:58.500900   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:59:58.514207   47798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:59:58.524054   47798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:59:58.532896   47798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:59:58.532945   47798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0919 17:59:58.588089   47798 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0919 17:59:58.588197   47798 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:59:58.739994   47798 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:59:58.740116   47798 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:59:58.740291   47798 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:59:58.968628   47798 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:59:58.968805   47798 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:59:58.977284   47798 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0919 17:59:59.111196   47798 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:59:59.113466   47798 out.go:204]   - Generating certificates and keys ...
	I0919 17:59:59.113599   47798 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:59:59.113711   47798 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:59:59.113854   47798 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:59:59.113938   47798 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:59:59.114070   47798 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:59:59.114144   47798 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:59:59.114911   47798 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:59:59.115382   47798 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:59:59.115986   47798 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:59:59.116548   47798 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:59:59.116630   47798 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:59:59.116713   47798 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:59:59.334495   47798 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:59:59.627886   47798 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:59:59.967368   47798 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:00:00.114260   47798 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:00:00.115507   47798 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:00:00.117811   47798 out.go:204]   - Booting up control plane ...
	I0919 18:00:00.117935   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:00:00.122651   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:00:00.125112   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:00:00.126687   47798 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:00:00.129807   47798 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 18:00:11.635043   47798 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.504905 seconds
	I0919 18:00:11.635206   47798 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:00:11.654058   47798 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:00:12.194702   47798 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:00:12.194899   47798 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-100627 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0919 18:00:12.704504   47798 kubeadm.go:322] [bootstrap-token] Using token: exrkug.z0q4aqb4emd0lkvm
	I0919 18:00:12.706136   47798 out.go:204]   - Configuring RBAC rules ...
	I0919 18:00:12.706241   47798 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:00:12.721292   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:00:12.729553   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:00:12.735434   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:00:12.739232   47798 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:00:12.816288   47798 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 18:00:13.140789   47798 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 18:00:13.142170   47798 kubeadm.go:322] 
	I0919 18:00:13.142257   47798 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 18:00:13.142268   47798 kubeadm.go:322] 
	I0919 18:00:13.142338   47798 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 18:00:13.142348   47798 kubeadm.go:322] 
	I0919 18:00:13.142382   47798 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 18:00:13.142468   47798 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:00:13.142554   47798 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:00:13.142571   47798 kubeadm.go:322] 
	I0919 18:00:13.142642   47798 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 18:00:13.142734   47798 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:00:13.142826   47798 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:00:13.142841   47798 kubeadm.go:322] 
	I0919 18:00:13.142952   47798 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0919 18:00:13.143062   47798 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 18:00:13.143073   47798 kubeadm.go:322] 
	I0919 18:00:13.143177   47798 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token exrkug.z0q4aqb4emd0lkvm \
	I0919 18:00:13.143336   47798 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 18:00:13.143374   47798 kubeadm.go:322]     --control-plane 	  
	I0919 18:00:13.143387   47798 kubeadm.go:322] 
	I0919 18:00:13.143501   47798 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:00:13.143511   47798 kubeadm.go:322] 
	I0919 18:00:13.143613   47798 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token exrkug.z0q4aqb4emd0lkvm \
	I0919 18:00:13.143744   47798 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 18:00:13.144341   47798 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:00:13.144373   47798 cni.go:84] Creating CNI manager for ""
	I0919 18:00:13.144392   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:00:13.146075   47798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:00:13.148011   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:00:13.159265   47798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 18:00:13.178271   47798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:00:13.178388   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.178420   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=old-k8s-version-100627 minikube.k8s.io/updated_at=2023_09_19T18_00_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.212392   47798 ops.go:34] apiserver oom_adj: -16
	I0919 18:00:13.509743   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.611752   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:14.210418   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:14.710689   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:15.210316   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:15.710515   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:16.210852   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:16.710451   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:17.210179   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:17.710559   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:18.210390   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:18.710683   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:19.210573   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:19.710581   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:20.210732   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:20.710461   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:21.210702   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:21.709813   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:22.209903   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:22.709847   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:23.210276   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:23.710692   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:24.210645   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:24.710835   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:25.209793   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:25.710473   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:26.209945   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:26.710136   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:27.210552   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:27.710679   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:28.209990   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:28.365531   47798 kubeadm.go:1081] duration metric: took 15.187210441s to wait for elevateKubeSystemPrivileges.
	I0919 18:00:28.365564   47798 kubeadm.go:406] StartCluster complete in 5m50.250366407s
	I0919 18:00:28.365586   47798 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:00:28.365675   47798 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 18:00:28.368279   47798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:00:28.368566   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:00:28.368696   47798 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 18:00:28.368769   47798 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368797   47798 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-100627"
	I0919 18:00:28.368803   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 18:00:28.368850   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.368863   47798 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368878   47798 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-100627"
	W0919 18:00:28.368886   47798 addons.go:240] addon metrics-server should already be in state true
	I0919 18:00:28.368922   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.368851   47798 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368982   47798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-100627"
	I0919 18:00:28.369268   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369273   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369292   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.369294   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.369392   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369412   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.389023   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0919 18:00:28.389631   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.389718   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35909
	I0919 18:00:28.390023   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.390257   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36187
	I0919 18:00:28.390523   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.390547   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.390646   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.390676   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.390676   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.390895   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391311   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391391   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.391418   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.391709   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391712   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.391748   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.391757   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.391791   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.391838   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.410811   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0919 18:00:28.410846   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0919 18:00:28.411329   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.411366   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.411777   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.411796   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.411888   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.411905   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.412177   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.412219   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.412326   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.412402   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.414149   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.417333   47798 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 18:00:28.414621   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.419038   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:00:28.419051   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:00:28.419071   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.420833   47798 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:00:28.422332   47798 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:00:28.422358   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:00:28.422378   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.422103   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.422902   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.422992   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.423016   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.423112   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.423305   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.423474   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.425328   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.425845   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.425869   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.425895   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.426078   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.426219   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.426322   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.464699   47798 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-100627"
	I0919 18:00:28.464737   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.465028   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.465059   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.479442   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I0919 18:00:28.479839   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.480266   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.480294   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.480676   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.481211   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.481248   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.495810   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0919 18:00:28.496299   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.496709   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.496740   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.497099   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.497375   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.499150   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.499406   47798 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:00:28.499420   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:00:28.499434   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.502227   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.502622   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.502653   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.502792   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.502961   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.503112   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.503256   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.738306   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:00:28.738334   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 18:00:28.739481   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:00:28.753537   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:00:28.807289   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:00:28.807321   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:00:28.904080   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:00:28.904107   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:00:28.991114   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:00:29.327327   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:00:29.371292   47798 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-100627" context rescaled to 1 replicas
	I0919 18:00:29.371337   47798 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:00:29.373222   47798 out.go:177] * Verifying Kubernetes components...
	I0919 18:00:29.374912   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:00:30.105746   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366227457s)
	I0919 18:00:30.105776   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.352204878s)
	I0919 18:00:30.105793   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.105805   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.105814   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.105827   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106180   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Closing plugin on server side
	I0919 18:00:30.106222   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106236   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106246   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106259   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106357   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106373   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106396   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106408   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106486   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106500   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106513   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106522   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106592   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106602   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106826   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106842   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.185977   47798 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0919 18:00:30.185980   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.194821805s)
	I0919 18:00:30.186035   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.186031   47798 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-100627" to be "Ready" ...
	I0919 18:00:30.186049   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.186367   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.186383   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.186393   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.186402   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.186647   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.186671   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.186681   47798 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-100627"
	I0919 18:00:30.188971   47798 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0919 18:00:30.190949   47798 addons.go:502] enable addons completed in 1.822257993s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0919 18:00:30.236503   47798 node_ready.go:49] node "old-k8s-version-100627" has status "Ready":"True"
	I0919 18:00:30.236526   47798 node_ready.go:38] duration metric: took 50.473068ms waiting for node "old-k8s-version-100627" to be "Ready" ...
	I0919 18:00:30.236538   47798 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:00:30.243959   47798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:32.262563   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:34.263997   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:36.762957   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:37.763670   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.763694   47798 pod_ready.go:81] duration metric: took 7.519708991s waiting for pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.763704   47798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.769351   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.769371   47798 pod_ready.go:81] duration metric: took 5.660975ms waiting for pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.769382   47798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x7p9v" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.773846   47798 pod_ready.go:92] pod "kube-proxy-x7p9v" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.773866   47798 pod_ready.go:81] duration metric: took 4.476479ms waiting for pod "kube-proxy-x7p9v" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.773879   47798 pod_ready.go:38] duration metric: took 7.537327576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:00:37.773896   47798 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:00:37.773947   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:00:37.789245   47798 api_server.go:72] duration metric: took 8.417877969s to wait for apiserver process to appear ...
	I0919 18:00:37.789267   47798 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:00:37.789283   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 18:00:37.796929   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0919 18:00:37.798217   47798 api_server.go:141] control plane version: v1.16.0
	I0919 18:00:37.798233   47798 api_server.go:131] duration metric: took 8.960108ms to wait for apiserver health ...
	I0919 18:00:37.798240   47798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:00:37.802732   47798 system_pods.go:59] 5 kube-system pods found
	I0919 18:00:37.802751   47798 system_pods.go:61] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:37.802755   47798 system_pods.go:61] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:37.802759   47798 system_pods.go:61] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:37.802765   47798 system_pods.go:61] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:37.802771   47798 system_pods.go:61] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:37.802775   47798 system_pods.go:74] duration metric: took 4.531294ms to wait for pod list to return data ...
	I0919 18:00:37.802781   47798 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:00:37.805090   47798 default_sa.go:45] found service account: "default"
	I0919 18:00:37.805108   47798 default_sa.go:55] duration metric: took 2.323003ms for default service account to be created ...
	I0919 18:00:37.805115   47798 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:00:37.809387   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:37.809412   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:37.809421   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:37.809428   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:37.809437   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:37.809445   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:37.809492   47798 retry.go:31] will retry after 308.50392ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.123229   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.123251   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.123256   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.123262   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.123271   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.123277   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.123291   47798 retry.go:31] will retry after 322.697394ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.452201   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.452227   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.452232   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.452236   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.452242   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.452248   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.452263   47798 retry.go:31] will retry after 457.851598ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.916270   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.916309   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.916318   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.916325   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.916336   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.916345   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.916367   47798 retry.go:31] will retry after 438.479707ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:39.360169   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:39.360194   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:39.360199   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:39.360203   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:39.360210   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:39.360214   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:39.360228   47798 retry.go:31] will retry after 636.764599ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:40.002876   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:40.002902   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:40.002907   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:40.002911   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:40.002918   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:40.002922   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:40.002936   47798 retry.go:31] will retry after 763.456742ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:40.771715   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:40.771743   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:40.771751   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:40.771758   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:40.771768   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:40.771777   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:40.771794   47798 retry.go:31] will retry after 849.595493ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:41.628988   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:41.629014   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:41.629019   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:41.629024   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:41.629030   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:41.629035   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:41.629048   47798 retry.go:31] will retry after 1.130396523s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:42.765798   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:42.765825   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:42.765830   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:42.765834   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:42.765841   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:42.765846   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:42.765861   47798 retry.go:31] will retry after 1.444918771s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:44.216701   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:44.216726   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:44.216731   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:44.216735   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:44.216743   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:44.216751   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:44.216769   47798 retry.go:31] will retry after 2.010339666s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:46.233732   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:46.233764   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:46.233772   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:46.233779   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:46.233789   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:46.233798   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:46.233817   47798 retry.go:31] will retry after 2.386355588s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:48.625414   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:48.625451   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:48.625458   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:48.625463   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:48.625469   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:48.625478   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:48.625496   47798 retry.go:31] will retry after 3.40684833s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:52.037490   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:52.037516   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:52.037522   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:52.037526   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:52.037532   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:52.037538   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:52.037553   47798 retry.go:31] will retry after 4.080274795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:56.123283   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:56.123307   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:56.123312   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:56.123316   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:56.123322   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:56.123327   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:56.123341   47798 retry.go:31] will retry after 4.076928493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:00.205817   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:01:00.205842   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:00.205848   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:00.205851   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:00.205860   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:00.205865   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:00.205880   47798 retry.go:31] will retry after 6.340158574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:06.551794   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:01:06.551821   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:06.551829   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:06.551835   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:06.551844   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:06.551852   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:06.551870   47798 retry.go:31] will retry after 8.178931758s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:14.737898   47798 system_pods.go:86] 8 kube-system pods found
	I0919 18:01:14.737926   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:14.737934   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:14.737941   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:14.737947   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Pending
	I0919 18:01:14.737955   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:14.737961   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Pending
	I0919 18:01:14.737969   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:14.737977   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:14.737996   47798 retry.go:31] will retry after 7.690456991s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
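The retry loop above is the wait for the control-plane components (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) to show up as Running pods in kube-system, with the delay growing between attempts until the static pods are created. Below is a minimal sketch of an equivalent check using client-go; it assumes a kubeconfig at the default path and illustrates the idea rather than reproducing minikube's own system_pods.go.

// Sketch: list kube-system pods and report which control-plane components are
// still missing, retrying with a rough exponential backoff. The kubeconfig
// location is an assumption; this is not minikube's actual code.
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func missingComponents(ctx context.Context, cs *kubernetes.Clientset) ([]string, error) {
	required := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var missing []string
	for _, name := range required {
		found := false
		for _, p := range pods.Items {
			// Static control-plane pods are named <component>-<nodename>.
			if strings.HasPrefix(p.Name, name+"-") && p.Status.Phase == "Running" {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, name)
		}
	}
	return missing, nil
}

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed location
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	backoff := 300 * time.Millisecond
	for {
		missing, err := missingComponents(context.TODO(), cs)
		if err != nil {
			panic(err)
		}
		if len(missing) == 0 {
			fmt.Println("all control-plane components are running")
			return
		}
		fmt.Printf("will retry after %s: missing components: %s\n", backoff, strings.Join(missing, ", "))
		time.Sleep(backoff)
		if backoff < 10*time.Second {
			backoff *= 2 // delays grow between attempts, as in the log above
		}
	}
}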
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:47:19 UTC, ends at Tue 2023-09-19 18:01:21 UTC. --
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.632779255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146481632765834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6f860eec-0209-480f-81b4-eeac7ec47cee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.633637860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b47a2b97-e061-4d0b-bc71-d0ee4f5605ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.633690116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b47a2b97-e061-4d0b-bc71-d0ee4f5605ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.633937011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145704710627365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c82e4737fd63bdaff2180553ad757d0219a36e7e1da37e47bedfba9ba440687,PodSandboxId:fb8b0fd71c3c434c806072d67372354cd9f6627a3ba236d578478b6a4e1397e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1695145683935078834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fe5ecac-dc28-400f-9832-186d228038a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5dc3ec37,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82,PodSandboxId:52fd365d383f42f849b04df9996a7b136b23765cf1056461101a3fb4dfa19795,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145681213820666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ff79be-8293-4d55-b285-4c6d1d64adf0,},Annotations:map[string]string{io.kubernetes.container.hash: 29cbce72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201,PodSandboxId:4f015308d1a02d70eb3fadfbefc08c459a60cc8c72e933c00e3692d581325d86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145673689304961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cghw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2f9f0da5-e5e6-40e4-bb49-16749597ac07,},Annotations:map[string]string{io.kubernetes.container.hash: 9a44f644,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695145673420437976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c,PodSandboxId:85199eeddffc5c92dcb451c8e4804aa31bf4c67116ee5a5bc280a7c0359713d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145667299630176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 3a1055bc1abe15d70dabf15bc60452bc,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6,PodSandboxId:5db4458e45c836aa760cb69f8d33aab729f84b6b060fc1f7faf841f161c3909b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145666950319008,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-415555,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 67f8ba03cffad6e4d002be3dbed01bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2,PodSandboxId:94cada6b81dbb904fca2eec16c21cef4160abe1a72aa173ba4157044d97fda63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145666733961516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
8de7ac8f150b3cf13057f5fdf78f67b,},Annotations:map[string]string{io.kubernetes.container.hash: d53c868,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b,PodSandboxId:282bac4b1298549caf1cd5fa998b2a5427c0521b887b7061843c4a96ee5cbf38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145666570358234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
67fda935b445f8213befc2eed16db1,},Annotations:map[string]string{io.kubernetes.container.hash: a65da78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b47a2b97-e061-4d0b-bc71-d0ee4f5605ad name=/runtime.v1.RuntimeService/ListContainers
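The repeated Version, ImageFsInfo and ListContainers requests in this journal are the kubelet's routine CRI polling of CRI-O; each large Response block is the full, unfiltered container list for the node. Below is a minimal sketch of issuing the same two runtime-service calls directly over the CRI gRPC API; the socket path /var/run/crio/crio.sock is an assumption (CRI-O's common default), not something taken from this report.

// Sketch: query a CRI runtime (here CRI-O) for its version and its full
// container list, mirroring the Version and ListContainers requests logged
// above. Socket path and timeout are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Equivalent of the /runtime.v1.RuntimeService/Version requests above.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// Equivalent of the unfiltered ListContainers requests that produce the
	// large Response blocks in the journal.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-30s %s\n", c.Metadata.Name, c.State)
	}
}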
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.676663572Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=40ccc2fe-047d-41d3-97f7-e058490d2635 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.676720216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=40ccc2fe-047d-41d3-97f7-e058490d2635 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.677959434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1a24cc6b-4163-44eb-b98f-c41c5f9d1168 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.678378920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146481678364904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1a24cc6b-4163-44eb-b98f-c41c5f9d1168 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.678843346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c462d5eb-39b6-4a0b-8a34-be71ba7d8d8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.678885724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c462d5eb-39b6-4a0b-8a34-be71ba7d8d8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.679156026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145704710627365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c82e4737fd63bdaff2180553ad757d0219a36e7e1da37e47bedfba9ba440687,PodSandboxId:fb8b0fd71c3c434c806072d67372354cd9f6627a3ba236d578478b6a4e1397e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1695145683935078834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fe5ecac-dc28-400f-9832-186d228038a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5dc3ec37,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82,PodSandboxId:52fd365d383f42f849b04df9996a7b136b23765cf1056461101a3fb4dfa19795,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145681213820666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ff79be-8293-4d55-b285-4c6d1d64adf0,},Annotations:map[string]string{io.kubernetes.container.hash: 29cbce72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201,PodSandboxId:4f015308d1a02d70eb3fadfbefc08c459a60cc8c72e933c00e3692d581325d86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145673689304961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cghw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2f9f0da5-e5e6-40e4-bb49-16749597ac07,},Annotations:map[string]string{io.kubernetes.container.hash: 9a44f644,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695145673420437976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c,PodSandboxId:85199eeddffc5c92dcb451c8e4804aa31bf4c67116ee5a5bc280a7c0359713d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145667299630176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 3a1055bc1abe15d70dabf15bc60452bc,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6,PodSandboxId:5db4458e45c836aa760cb69f8d33aab729f84b6b060fc1f7faf841f161c3909b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145666950319008,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-415555,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 67f8ba03cffad6e4d002be3dbed01bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2,PodSandboxId:94cada6b81dbb904fca2eec16c21cef4160abe1a72aa173ba4157044d97fda63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145666733961516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
8de7ac8f150b3cf13057f5fdf78f67b,},Annotations:map[string]string{io.kubernetes.container.hash: d53c868,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b,PodSandboxId:282bac4b1298549caf1cd5fa998b2a5427c0521b887b7061843c4a96ee5cbf38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145666570358234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
67fda935b445f8213befc2eed16db1,},Annotations:map[string]string{io.kubernetes.container.hash: a65da78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c462d5eb-39b6-4a0b-8a34-be71ba7d8d8e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.722544843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e4a9b6ed-c3aa-49c3-9ca3-11d91edf0e4c name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.722621695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e4a9b6ed-c3aa-49c3-9ca3-11d91edf0e4c name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.723890471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e4a54c65-3a81-470d-8660-3a25d789e9ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.724431690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146481724416220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e4a54c65-3a81-470d-8660-3a25d789e9ba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.725146545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fbf7b398-8232-4809-b377-04786ac7e5a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.725229417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fbf7b398-8232-4809-b377-04786ac7e5a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.725410752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145704710627365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c82e4737fd63bdaff2180553ad757d0219a36e7e1da37e47bedfba9ba440687,PodSandboxId:fb8b0fd71c3c434c806072d67372354cd9f6627a3ba236d578478b6a4e1397e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1695145683935078834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fe5ecac-dc28-400f-9832-186d228038a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5dc3ec37,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82,PodSandboxId:52fd365d383f42f849b04df9996a7b136b23765cf1056461101a3fb4dfa19795,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145681213820666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ff79be-8293-4d55-b285-4c6d1d64adf0,},Annotations:map[string]string{io.kubernetes.container.hash: 29cbce72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201,PodSandboxId:4f015308d1a02d70eb3fadfbefc08c459a60cc8c72e933c00e3692d581325d86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145673689304961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cghw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2f9f0da5-e5e6-40e4-bb49-16749597ac07,},Annotations:map[string]string{io.kubernetes.container.hash: 9a44f644,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695145673420437976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c,PodSandboxId:85199eeddffc5c92dcb451c8e4804aa31bf4c67116ee5a5bc280a7c0359713d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145667299630176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 3a1055bc1abe15d70dabf15bc60452bc,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6,PodSandboxId:5db4458e45c836aa760cb69f8d33aab729f84b6b060fc1f7faf841f161c3909b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145666950319008,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-415555,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 67f8ba03cffad6e4d002be3dbed01bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2,PodSandboxId:94cada6b81dbb904fca2eec16c21cef4160abe1a72aa173ba4157044d97fda63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145666733961516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
8de7ac8f150b3cf13057f5fdf78f67b,},Annotations:map[string]string{io.kubernetes.container.hash: d53c868,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b,PodSandboxId:282bac4b1298549caf1cd5fa998b2a5427c0521b887b7061843c4a96ee5cbf38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145666570358234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
67fda935b445f8213befc2eed16db1,},Annotations:map[string]string{io.kubernetes.container.hash: a65da78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fbf7b398-8232-4809-b377-04786ac7e5a8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.760737163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=03ec4eec-8bcf-4ff4-b958-2172418ef418 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.760837971Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=03ec4eec-8bcf-4ff4-b958-2172418ef418 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.762465360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d7fa643b-663d-4ebc-81d8-ba8ab01bc0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.763238862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146481763220885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d7fa643b-663d-4ebc-81d8-ba8ab01bc0d1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.763913119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=22b5e1c3-fe2f-4d9e-9797-bd7402ce3f9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.764069537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=22b5e1c3-fe2f-4d9e-9797-bd7402ce3f9b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:21 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:01:21.764255257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145704710627365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c82e4737fd63bdaff2180553ad757d0219a36e7e1da37e47bedfba9ba440687,PodSandboxId:fb8b0fd71c3c434c806072d67372354cd9f6627a3ba236d578478b6a4e1397e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1695145683935078834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fe5ecac-dc28-400f-9832-186d228038a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5dc3ec37,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82,PodSandboxId:52fd365d383f42f849b04df9996a7b136b23765cf1056461101a3fb4dfa19795,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145681213820666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ff79be-8293-4d55-b285-4c6d1d64adf0,},Annotations:map[string]string{io.kubernetes.container.hash: 29cbce72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201,PodSandboxId:4f015308d1a02d70eb3fadfbefc08c459a60cc8c72e933c00e3692d581325d86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145673689304961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cghw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2f9f0da5-e5e6-40e4-bb49-16749597ac07,},Annotations:map[string]string{io.kubernetes.container.hash: 9a44f644,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695145673420437976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c,PodSandboxId:85199eeddffc5c92dcb451c8e4804aa31bf4c67116ee5a5bc280a7c0359713d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145667299630176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 3a1055bc1abe15d70dabf15bc60452bc,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6,PodSandboxId:5db4458e45c836aa760cb69f8d33aab729f84b6b060fc1f7faf841f161c3909b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145666950319008,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-415555,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 67f8ba03cffad6e4d002be3dbed01bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2,PodSandboxId:94cada6b81dbb904fca2eec16c21cef4160abe1a72aa173ba4157044d97fda63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145666733961516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
8de7ac8f150b3cf13057f5fdf78f67b,},Annotations:map[string]string{io.kubernetes.container.hash: d53c868,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b,PodSandboxId:282bac4b1298549caf1cd5fa998b2a5427c0521b887b7061843c4a96ee5cbf38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145666570358234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
67fda935b445f8213befc2eed16db1,},Annotations:map[string]string{io.kubernetes.container.hash: a65da78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=22b5e1c3-fe2f-4d9e-9797-bd7402ce3f9b name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7e1ede777c67       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   df3f3e0c92506       storage-provisioner
	5c82e4737fd63       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   fb8b0fd71c3c4       busybox
	6165f78e9f3be       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   52fd365d383f4       coredns-5dd5756b68-6fxz5
	52ed624ea25f5       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      13 minutes ago      Running             kube-proxy                1                   4f015308d1a02       kube-proxy-5cghw
	9055f7f0e2b85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   df3f3e0c92506       storage-provisioner
	23740abdea376       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      13 minutes ago      Running             kube-scheduler            1                   85199eeddffc5       kube-scheduler-default-k8s-diff-port-415555
	3ead0fadb5c30       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      13 minutes ago      Running             kube-controller-manager   1                   5db4458e45c83       kube-controller-manager-default-k8s-diff-port-415555
	837d6df2a022c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   94cada6b81dbb       etcd-default-k8s-diff-port-415555
	54b31f09971f1       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      13 minutes ago      Running             kube-apiserver            1                   282bac4b12985       kube-apiserver-default-k8s-diff-port-415555
	
	* 
	* ==> coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58811 - 63633 "HINFO IN 6878534768593487844.6714548147103407529. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016870824s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-415555
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-415555
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=default-k8s-diff-port-415555
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_40_51_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:40:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-415555
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 18:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:58:36 +0000   Tue, 19 Sep 2023 17:40:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:58:36 +0000   Tue, 19 Sep 2023 17:40:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:58:36 +0000   Tue, 19 Sep 2023 17:40:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:58:36 +0000   Tue, 19 Sep 2023 17:48:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.228
	  Hostname:    default-k8s-diff-port-415555
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 99e6d1633d0a4bbbbcd368c587e05c2e
	  System UUID:                99e6d163-3d0a-4bbb-bcd3-68c587e05c2e
	  Boot ID:                    ce7fb3ba-3d90-469a-92f4-eb71fae2ed96
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-6fxz5                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-415555                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-415555              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-415555    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-5cghw                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-415555              100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-57f55c9bc5-vq4p7                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-415555 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-415555 event: Registered Node default-k8s-diff-port-415555 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-415555 event: Registered Node default-k8s-diff-port-415555 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072975] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.400516] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.365776] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149472] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.693761] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.488880] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.128331] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.165634] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.123149] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.233529] systemd-fstab-generator[709]: Ignoring "noauto" for root device
	[ +17.451828] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +15.366472] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] <==
	* {"level":"info","ts":"2023-09-19T17:47:50.308651Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.228:2379"}
	{"level":"info","ts":"2023-09-19T17:47:50.309089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:47:50.309925Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T17:47:50.312145Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:47:50.312205Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:47:55.650977Z","caller":"traceutil/trace.go:171","msg":"trace[1747419760] transaction","detail":"{read_only:false; response_revision:545; number_of_response:1; }","duration":"161.750593ms","start":"2023-09-19T17:47:55.489206Z","end":"2023-09-19T17:47:55.650956Z","steps":["trace[1747419760] 'process raft request'  (duration: 161.371498ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T17:47:56.18278Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.650674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4012"}
	{"level":"info","ts":"2023-09-19T17:47:56.182877Z","caller":"traceutil/trace.go:171","msg":"trace[714608182] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:545; }","duration":"376.761783ms","start":"2023-09-19T17:47:55.8061Z","end":"2023-09-19T17:47:56.182862Z","steps":["trace[714608182] 'range keys from in-memory index tree'  (duration: 376.446832ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T17:47:56.182914Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T17:47:55.806081Z","time spent":"376.823192ms","remote":"127.0.0.1:42496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4036,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2023-09-19T17:47:56.183284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"356.36214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2023-09-19T17:47:56.183347Z","caller":"traceutil/trace.go:171","msg":"trace[1576469703] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:1; response_revision:545; }","duration":"356.428318ms","start":"2023-09-19T17:47:55.826909Z","end":"2023-09-19T17:47:56.183338Z","steps":["trace[1576469703] 'range keys from in-memory index tree'  (duration: 356.283714ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T17:47:56.183376Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T17:47:55.826895Z","time spent":"356.472683ms","remote":"127.0.0.1:42478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":1,"response size":1016,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" "}
	{"level":"info","ts":"2023-09-19T17:47:56.493505Z","caller":"traceutil/trace.go:171","msg":"trace[863101012] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"302.278108ms","start":"2023-09-19T17:47:56.191209Z","end":"2023-09-19T17:47:56.493487Z","steps":["trace[863101012] 'process raft request'  (duration: 302.174264ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T17:47:56.494129Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T17:47:56.191196Z","time spent":"302.412295ms","remote":"127.0.0.1:42410","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":817,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-5dd5756b68-6fxz5.17865e02076121a0\" mod_revision:537 > success:<request_put:<key:\"/registry/events/kube-system/coredns-5dd5756b68-6fxz5.17865e02076121a0\" value_size:729 lease:8088617413071939444 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-5dd5756b68-6fxz5.17865e02076121a0\" > >"}
	{"level":"warn","ts":"2023-09-19T17:47:56.933447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.746924ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17311989449926715589 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/busybox.17865e020888d99f\" mod_revision:538 > success:<request_put:<key:\"/registry/events/default/busybox.17865e020888d99f\" value_size:700 lease:8088617413071939444 >> failure:<request_range:<key:\"/registry/events/default/busybox.17865e020888d99f\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-19T17:47:56.933604Z","caller":"traceutil/trace.go:171","msg":"trace[1347015470] linearizableReadLoop","detail":"{readStateIndex:586; appliedIndex:585; }","duration":"203.997174ms","start":"2023-09-19T17:47:56.729594Z","end":"2023-09-19T17:47:56.933592Z","steps":["trace[1347015470] 'read index received'  (duration: 71.915415ms)","trace[1347015470] 'applied index is now lower than readState.Index'  (duration: 132.079977ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-19T17:47:56.933717Z","caller":"traceutil/trace.go:171","msg":"trace[1552820509] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"430.016396ms","start":"2023-09-19T17:47:56.503685Z","end":"2023-09-19T17:47:56.933701Z","steps":["trace[1552820509] 'process raft request'  (duration: 297.868465ms)","trace[1552820509] 'compare'  (duration: 131.376819ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-19T17:47:56.934245Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T17:47:56.503672Z","time spent":"430.536091ms","remote":"127.0.0.1:42410","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":767,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.17865e020888d99f\" mod_revision:538 > success:<request_put:<key:\"/registry/events/default/busybox.17865e020888d99f\" value_size:700 lease:8088617413071939444 >> failure:<request_range:<key:\"/registry/events/default/busybox.17865e020888d99f\" > >"}
	{"level":"warn","ts":"2023-09-19T17:47:56.93383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.252292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-415555\" ","response":"range_response_count:1 size:5714"}
	{"level":"info","ts":"2023-09-19T17:47:56.934441Z","caller":"traceutil/trace.go:171","msg":"trace[542020603] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-415555; range_end:; response_count:1; response_revision:547; }","duration":"204.866463ms","start":"2023-09-19T17:47:56.729564Z","end":"2023-09-19T17:47:56.93443Z","steps":["trace[542020603] 'agreement among raft nodes before linearized reading'  (duration: 204.179648ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T17:54:36.61231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.962446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-19T17:54:36.612668Z","caller":"traceutil/trace.go:171","msg":"trace[215829185] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:911; }","duration":"179.376736ms","start":"2023-09-19T17:54:36.433264Z","end":"2023-09-19T17:54:36.61264Z","steps":["trace[215829185] 'range keys from in-memory index tree'  (duration: 178.887804ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T17:57:50.392145Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":823}
	{"level":"info","ts":"2023-09-19T17:57:50.398835Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":823,"took":"4.702267ms","hash":3603110917}
	{"level":"info","ts":"2023-09-19T17:57:50.398963Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3603110917,"revision":823,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  18:01:22 up 14 min,  0 users,  load average: 0.24, 0.20, 0.11
	Linux default-k8s-diff-port-415555 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] <==
	* I0919 17:57:52.366888       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 17:57:53.367696       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:57:53.367794       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 17:57:53.367866       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 17:57:53.367799       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:57:53.368251       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 17:57:53.369034       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 17:58:52.209755       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 17:58:53.368969       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:58:53.369164       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 17:58:53.369210       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 17:58:53.369260       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:58:53.369358       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 17:58:53.371325       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 17:59:52.209712       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:00:52.210830       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:00:53.370098       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:00:53.370199       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:00:53.370234       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 18:00:53.372585       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:00:53.372765       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:00:53.372798       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] <==
	* I0919 17:55:35.733518       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:56:05.242656       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:56:05.748859       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:56:35.247968       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:56:35.759901       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:57:05.254854       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:57:05.769523       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:57:35.261643       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:57:35.784628       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:58:05.267944       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:58:05.794198       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:58:35.274206       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:58:35.803662       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 17:58:53.502204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="430.397µs"
	E0919 17:59:05.286408       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:59:05.812443       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 17:59:06.502529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="116.275µs"
	E0919 17:59:35.292438       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:59:35.827163       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:00:05.298712       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:00:05.837295       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:00:35.304850       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:00:35.846412       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:01:05.316813       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:01:05.857175       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] <==
	* I0919 17:47:53.900145       1 server_others.go:69] "Using iptables proxy"
	I0919 17:47:53.916483       1 node.go:141] Successfully retrieved node IP: 192.168.61.228
	I0919 17:47:53.984386       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 17:47:53.984480       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 17:47:54.001843       1 server_others.go:152] "Using iptables Proxier"
	I0919 17:47:54.002378       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 17:47:54.003485       1 server.go:846] "Version info" version="v1.28.2"
	I0919 17:47:54.004708       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:47:54.007325       1 config.go:188] "Starting service config controller"
	I0919 17:47:54.007390       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 17:47:54.007436       1 config.go:97] "Starting endpoint slice config controller"
	I0919 17:47:54.007459       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 17:47:54.008478       1 config.go:315] "Starting node config controller"
	I0919 17:47:54.009170       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 17:47:54.108250       1 shared_informer.go:318] Caches are synced for service config
	I0919 17:47:54.108237       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 17:47:54.109694       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] <==
	* I0919 17:47:49.767329       1 serving.go:348] Generated self-signed cert in-memory
	W0919 17:47:52.319207       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 17:47:52.319370       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:47:52.319479       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 17:47:52.319509       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 17:47:52.409675       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0919 17:47:52.409779       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:47:52.418361       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 17:47:52.418732       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 17:47:52.429393       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 17:47:52.432070       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 17:47:52.523798       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:47:19 UTC, ends at Tue 2023-09-19 18:01:22 UTC. --
	Sep 19 17:58:41 default-k8s-diff-port-415555 kubelet[932]: E0919 17:58:41.497557     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 17:58:45 default-k8s-diff-port-415555 kubelet[932]: E0919 17:58:45.497965     932 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 17:58:45 default-k8s-diff-port-415555 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 17:58:45 default-k8s-diff-port-415555 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 17:58:45 default-k8s-diff-port-415555 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 17:58:53 default-k8s-diff-port-415555 kubelet[932]: E0919 17:58:53.485812     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 17:59:06 default-k8s-diff-port-415555 kubelet[932]: E0919 17:59:06.484266     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 17:59:21 default-k8s-diff-port-415555 kubelet[932]: E0919 17:59:21.491287     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 17:59:36 default-k8s-diff-port-415555 kubelet[932]: E0919 17:59:36.484697     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 17:59:45 default-k8s-diff-port-415555 kubelet[932]: E0919 17:59:45.498781     932 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 17:59:45 default-k8s-diff-port-415555 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 17:59:45 default-k8s-diff-port-415555 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 17:59:45 default-k8s-diff-port-415555 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 17:59:51 default-k8s-diff-port-415555 kubelet[932]: E0919 17:59:51.483972     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:00:02 default-k8s-diff-port-415555 kubelet[932]: E0919 18:00:02.484317     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:00:13 default-k8s-diff-port-415555 kubelet[932]: E0919 18:00:13.483701     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:00:26 default-k8s-diff-port-415555 kubelet[932]: E0919 18:00:26.484151     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:00:38 default-k8s-diff-port-415555 kubelet[932]: E0919 18:00:38.484593     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:00:45 default-k8s-diff-port-415555 kubelet[932]: E0919 18:00:45.502434     932 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:00:45 default-k8s-diff-port-415555 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:00:45 default-k8s-diff-port-415555 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:00:45 default-k8s-diff-port-415555 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:00:52 default-k8s-diff-port-415555 kubelet[932]: E0919 18:00:52.483887     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:01:03 default-k8s-diff-port-415555 kubelet[932]: E0919 18:01:03.485182     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:01:18 default-k8s-diff-port-415555 kubelet[932]: E0919 18:01:18.484973     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	
	* 
	* ==> storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] <==
	* I0919 17:47:53.653660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 17:48:23.669872       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] <==
	* I0919 17:48:24.833529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 17:48:24.852143       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 17:48:24.852230       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 17:48:42.259190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"de8a1b71-0678-4f8a-80b6-13fe53c9d27a", APIVersion:"v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-415555_d24e9b48-8ae7-4ef8-a7c5-bcf71a3f09c6 became leader
	I0919 17:48:42.259679       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 17:48:42.259880       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-415555_d24e9b48-8ae7-4ef8-a7c5-bcf71a3f09c6!
	I0919 17:48:42.361227       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-415555_d24e9b48-8ae7-4ef8-a7c5-bcf71a3f09c6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-415555 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vq4p7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-415555 describe pod metrics-server-57f55c9bc5-vq4p7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-415555 describe pod metrics-server-57f55c9bc5-vq4p7: exit status 1 (63.571071ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vq4p7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-415555 describe pod metrics-server-57f55c9bc5-vq4p7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0919 17:52:56.282088   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-215748 -n no-preload-215748
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-19 18:01:42.800361632 +0000 UTC m=+5235.732345673
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215748 -n no-preload-215748
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-215748 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-215748 logs -n 25: (1.362253217s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-512928 -- sudo                         | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-512928                                 | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-367630                            | force-systemd-env-367630     | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-415155            | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-140688 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | disable-driver-mounts-140688                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:41 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-215748             | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-415555  | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC | 19 Sep 23 17:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC |                     |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-415155                 | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-215748                  | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-415555       | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC | 19 Sep 23 17:52 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-100627        | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC | 19 Sep 23 17:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-100627             | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC | 19 Sep 23 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:49:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:49:25.690379   47798 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:49:25.690666   47798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:49:25.690680   47798 out.go:309] Setting ErrFile to fd 2...
	I0919 17:49:25.690688   47798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:49:25.690866   47798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:49:25.691435   47798 out.go:303] Setting JSON to false
	I0919 17:49:25.692368   47798 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5516,"bootTime":1695140250,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:49:25.692468   47798 start.go:138] virtualization: kvm guest
	I0919 17:49:25.694628   47798 out.go:177] * [old-k8s-version-100627] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:49:25.696349   47798 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:49:25.696345   47798 notify.go:220] Checking for updates...
	I0919 17:49:25.697700   47798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:49:25.699081   47798 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:49:25.700392   47798 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:49:25.701684   47798 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:49:25.704016   47798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:49:25.705911   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:49:25.706464   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.706525   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.722505   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40829
	I0919 17:49:25.722936   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.723454   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.723479   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.723851   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.724042   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.726028   47798 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0919 17:49:25.727479   47798 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:49:25.727787   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.727829   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.743272   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I0919 17:49:25.743700   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.744180   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.744206   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.744589   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.744775   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.781696   47798 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:49:25.783056   47798 start.go:298] selected driver: kvm2
	I0919 17:49:25.783069   47798 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:49:25.783172   47798 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:49:25.783797   47798 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:49:25.783868   47798 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:49:25.797796   47798 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:49:25.798190   47798 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 17:49:25.798229   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:49:25.798239   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:49:25.798254   47798 start_flags.go:321] config:
	{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:49:25.798391   47798 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:49:25.800110   47798 out.go:177] * Starting control plane node old-k8s-version-100627 in cluster old-k8s-version-100627
	I0919 17:49:25.801393   47798 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:49:25.801433   47798 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0919 17:49:25.801447   47798 cache.go:57] Caching tarball of preloaded images
	I0919 17:49:25.801545   47798 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:49:25.801559   47798 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0919 17:49:25.801689   47798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:49:25.801924   47798 start.go:365] acquiring machines lock for old-k8s-version-100627: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:49:25.801971   47798 start.go:369] acquired machines lock for "old-k8s-version-100627" in 26.483µs
	I0919 17:49:25.801985   47798 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:49:25.801989   47798 fix.go:54] fixHost starting: 
	I0919 17:49:25.802270   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.802300   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.816968   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
	I0919 17:49:25.817484   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.818034   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.818069   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.818376   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.818564   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.818799   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:49:25.820610   47798 fix.go:102] recreateIfNeeded on old-k8s-version-100627: state=Running err=<nil>
	W0919 17:49:25.820646   47798 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:49:25.822656   47798 out.go:177] * Updating the running kvm2 "old-k8s-version-100627" VM ...
	I0919 17:49:25.475965   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:27.476794   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:24.179260   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:26.686283   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:26.993419   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:28.995394   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:25.824024   47798 machine.go:88] provisioning docker machine ...
	I0919 17:49:25.824053   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.824279   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:49:25.824480   47798 buildroot.go:166] provisioning hostname "old-k8s-version-100627"
	I0919 17:49:25.824508   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:49:25.824671   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:49:25.827416   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:49:25.827890   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:49:25.827920   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:49:25.828092   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:49:25.828287   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:49:25.828490   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:49:25.828642   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:49:25.828819   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:49:25.829172   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:49:25.829188   47798 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100627 && echo "old-k8s-version-100627" | sudo tee /etc/hostname
	I0919 17:49:28.724736   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:29.976563   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.976829   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:29.180775   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.677584   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:33.678666   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.493348   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:33.495016   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.796651   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:33.977341   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:36.477521   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:36.178183   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:38.679802   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:35.495920   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:37.993770   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:39.994165   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:37.876662   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:38.477642   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:40.977376   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:41.177699   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:43.178895   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:42.494311   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:44.494974   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:40.948690   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:43.476725   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:45.477936   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:47.977074   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:45.678443   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:48.178687   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:46.994529   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:49.494895   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:47.028682   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:50.100607   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:50.476569   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:52.478246   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:50.179250   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:52.180827   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:51.994091   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.494911   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.480792   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.978326   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.678236   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.678493   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:58.678539   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.496729   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:58.993989   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:59.224657   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:59.476603   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:01.477023   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:00.678913   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:03.178281   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:01.494409   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:03.993808   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:02.292662   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:03.477796   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.976205   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.180836   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:07.678312   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.994188   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:07.999270   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:08.372675   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:08.476522   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:10.976260   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:09.679568   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:12.179377   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:10.494291   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:12.995682   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:11.444679   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:13.476906   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:15.478193   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.976583   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:14.679325   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:16.690040   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:15.496998   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.993599   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:19.993922   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.524614   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:20.596688   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:20.476110   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:22.477330   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:19.184902   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:21.678830   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:23.679261   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:22.494626   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:24.993912   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:24.976379   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.976627   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.177309   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:28.179300   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:27.494133   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:29.494473   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.676677   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:29.748706   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:28.976722   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:30.980716   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:30.678715   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.177789   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:31.993563   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.995728   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.476205   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.975739   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:37.978115   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.178188   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:37.178328   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:36.493541   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:38.494380   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.832612   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:38.900652   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:40.476580   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:42.476989   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:39.180279   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:41.678338   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:43.678611   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:40.993785   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:42.994446   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:44.980626   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:44.976641   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:46.977032   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:46.178379   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:48.179405   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:45.494929   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:47.993704   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:49.995192   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:48.052702   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:48.977244   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:51.477325   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:50.678663   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:53.178707   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:52.493646   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:54.494478   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:54.132706   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:53.477737   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:55.977429   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:57.978145   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:55.678855   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:58.177724   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:56.993145   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:58.994370   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:57.208643   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:00.476193   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:02.476286   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:00.178398   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:02.677951   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:01.501993   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:03.993491   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:03.288721   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:04.476795   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:06.976387   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:05.177376   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:07.178224   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:05.995006   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:08.494405   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:06.360657   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:08.977404   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:11.475407   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:09.178322   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:11.179143   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:13.180235   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:10.494521   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:12.993993   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:12.436681   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:15.508678   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:13.975736   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.977800   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.679181   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:18.177065   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.494642   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:17.494846   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:19.993481   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:18.475821   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:20.476773   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:22.976145   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:20.178065   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:22.178249   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:21.993613   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:23.994655   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:21.588622   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:24.660703   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:24.976569   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:27.476021   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:24.678762   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:26.682314   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:26.493981   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:28.494262   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:29.477183   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:31.976125   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:29.178390   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:31.178551   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:33.678277   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:30.495041   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:32.993120   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:30.740717   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:33.816640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:33.977079   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:36.475678   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:36.179024   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:38.678508   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:35.495368   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:37.994521   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:39.892631   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:38.476601   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:40.978279   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:41.178365   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:43.678896   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:40.493826   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:42.992893   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:44.993574   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:42.968646   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:43.478156   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:45.976257   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:47.977272   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:46.178127   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:48.178192   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:47.494860   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:49.993714   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:49.044674   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:50.476391   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:52.976686   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:50.678434   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:53.177908   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:51.995140   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:54.494996   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:52.116699   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:54.977835   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:57.475875   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:55.178219   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:57.179598   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:56.992881   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:58.994100   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:58.200619   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:59.476340   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:01.975559   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:59.678336   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:00.158668   45961 pod_ready.go:81] duration metric: took 4m0.000408372s waiting for pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:00.158710   45961 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:00.158733   45961 pod_ready.go:38] duration metric: took 4m12.69690087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:00.158768   45961 kubeadm.go:640] restartCluster took 4m32.67884897s
	W0919 17:52:00.158862   45961 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0919 17:52:00.158899   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:52:00.995208   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:03.493604   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:01.272609   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:03.976776   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:06.478653   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:05.495181   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:07.995025   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:07.348614   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:10.424641   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:08.170853   46282 pod_ready.go:81] duration metric: took 4m0.00010513s waiting for pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:08.170890   46282 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:08.170903   46282 pod_ready.go:38] duration metric: took 4m5.202195097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:08.170929   46282 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:52:08.170960   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:08.171010   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:08.229465   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:08.229484   46282 cri.go:89] found id: ""
	I0919 17:52:08.229491   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:08.229537   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.234379   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:08.234434   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:08.280999   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:08.281033   46282 cri.go:89] found id: ""
	I0919 17:52:08.281044   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:08.281097   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.285499   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:08.285561   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:08.327387   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:08.327413   46282 cri.go:89] found id: ""
	I0919 17:52:08.327423   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:08.327481   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.333158   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:08.333235   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:08.375921   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:08.375946   46282 cri.go:89] found id: ""
	I0919 17:52:08.375955   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:08.376008   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.380156   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:08.380220   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:08.425586   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:08.425613   46282 cri.go:89] found id: ""
	I0919 17:52:08.425620   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:08.425676   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.430229   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:08.430302   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:08.482920   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:08.482946   46282 cri.go:89] found id: ""
	I0919 17:52:08.482956   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:08.483017   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.488497   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:08.488559   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:08.543405   46282 cri.go:89] found id: ""
	I0919 17:52:08.543432   46282 logs.go:284] 0 containers: []
	W0919 17:52:08.543441   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:08.543449   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:08.543510   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:08.588287   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:08.588309   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:08.588314   46282 cri.go:89] found id: ""
	I0919 17:52:08.588326   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:08.588390   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.592986   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.597223   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:08.597245   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:08.648372   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:08.648400   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:08.705158   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:08.705203   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:08.754475   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:08.754511   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:08.797571   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:08.797603   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:08.950578   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:08.950617   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:08.998529   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:08.998555   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:09.039415   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:09.039445   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:09.081622   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:09.081657   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:09.095239   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:09.095269   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:09.141402   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:09.141429   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:09.186918   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:09.186953   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:09.244473   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:09.244508   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:12.216337   46282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:52:12.232741   46282 api_server.go:72] duration metric: took 4m15.890515742s to wait for apiserver process to appear ...
	I0919 17:52:12.232764   46282 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:52:12.232793   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:12.232844   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:12.279741   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:12.279769   46282 cri.go:89] found id: ""
	I0919 17:52:12.279780   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:12.279836   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.284490   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:12.284560   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:12.322547   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:12.322575   46282 cri.go:89] found id: ""
	I0919 17:52:12.322585   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:12.322648   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.326924   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:12.326981   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:12.376181   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:12.376201   46282 cri.go:89] found id: ""
	I0919 17:52:12.376208   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:12.376259   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.380831   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:12.380892   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:12.422001   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:12.422035   46282 cri.go:89] found id: ""
	I0919 17:52:12.422045   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:12.422112   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.426372   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:12.426456   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:12.474718   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:12.474739   46282 cri.go:89] found id: ""
	I0919 17:52:12.474749   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:12.474804   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.479781   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:12.479837   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:12.525008   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:12.525038   46282 cri.go:89] found id: ""
	I0919 17:52:12.525047   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:12.525106   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.529414   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:12.529480   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:12.573369   46282 cri.go:89] found id: ""
	I0919 17:52:12.573395   46282 logs.go:284] 0 containers: []
	W0919 17:52:12.573403   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:12.573410   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:12.573461   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:12.618041   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:12.618063   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:12.618067   46282 cri.go:89] found id: ""
	I0919 17:52:12.618074   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:12.618118   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.622248   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.626519   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:12.626537   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:12.667023   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:12.667052   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:13.123963   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:13.123996   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:10.495145   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:12.994448   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:13.243498   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:13.243533   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:13.289172   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:13.289208   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:13.325853   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:13.325883   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:13.363915   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:13.363943   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:13.412359   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:13.412394   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:13.458675   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:13.458706   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:13.473516   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:13.473549   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:13.538694   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:13.538723   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:13.606826   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:13.606871   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:13.652363   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:13.652394   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.204482   46282 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8444/healthz ...
	I0919 17:52:16.210733   46282 api_server.go:279] https://192.168.61.228:8444/healthz returned 200:
	ok
	I0919 17:52:16.212054   46282 api_server.go:141] control plane version: v1.28.2
	I0919 17:52:16.212076   46282 api_server.go:131] duration metric: took 3.979306376s to wait for apiserver health ...
	I0919 17:52:16.212085   46282 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:52:16.212106   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:16.212148   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:16.263882   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:16.263908   46282 cri.go:89] found id: ""
	I0919 17:52:16.263918   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:16.263978   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.268238   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:16.268291   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:16.309480   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:16.309504   46282 cri.go:89] found id: ""
	I0919 17:52:16.309511   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:16.309560   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.313860   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:16.313910   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:16.353715   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:16.353741   46282 cri.go:89] found id: ""
	I0919 17:52:16.353751   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:16.353812   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.358128   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:16.358194   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:16.398792   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:16.398811   46282 cri.go:89] found id: ""
	I0919 17:52:16.398818   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:16.398865   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.403410   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:16.403463   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:16.449884   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:16.449910   46282 cri.go:89] found id: ""
	I0919 17:52:16.449924   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:16.449966   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.454404   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:16.454462   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:16.500246   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:16.500265   46282 cri.go:89] found id: ""
	I0919 17:52:16.500274   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:16.500328   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.504468   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:16.504531   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:16.545865   46282 cri.go:89] found id: ""
	I0919 17:52:16.545888   46282 logs.go:284] 0 containers: []
	W0919 17:52:16.545895   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:16.545900   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:16.545953   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:16.584533   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:16.584560   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.584565   46282 cri.go:89] found id: ""
	I0919 17:52:16.584571   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:16.584619   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.588723   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.592429   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:16.592459   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:16.643853   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:16.643884   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:16.693660   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:16.693697   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:16.710833   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:16.710860   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:16.769518   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:16.769548   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:16.819614   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:16.819645   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.860112   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:16.860154   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:16.918657   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:16.918687   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:16.962381   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:16.962412   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:17.304580   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:17.304618   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:17.449337   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:17.449368   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:17.522234   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:17.522268   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:17.581061   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:17.581093   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:13.986517   45961 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.82758933s)
	I0919 17:52:13.986593   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:14.002396   45961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:52:14.012005   45961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:52:14.020952   45961 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:52:14.021075   45961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 17:52:14.249350   45961 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:52:20.161795   46282 system_pods.go:59] 8 kube-system pods found
	I0919 17:52:20.161825   46282 system_pods.go:61] "coredns-5dd5756b68-6fxz5" [75ff79be-8293-4d55-b285-4c6d1d64adf0] Running
	I0919 17:52:20.161833   46282 system_pods.go:61] "etcd-default-k8s-diff-port-415555" [673a7ad9-f811-426c-aef6-7a36f2ddcacf] Running
	I0919 17:52:20.161840   46282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-415555" [a28bd2e3-0c0b-415e-986f-e92052c9eb0d] Running
	I0919 17:52:20.161845   46282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-415555" [b3b0d3ac-4f9f-4127-902e-4ea4c49100ba] Running
	I0919 17:52:20.161850   46282 system_pods.go:61] "kube-proxy-5cghw" [2f9f0da5-e5e6-40e4-bb49-16749597ac07] Running
	I0919 17:52:20.161856   46282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-415555" [60410add-cbf3-403a-9a84-f8566f804757] Running
	I0919 17:52:20.161866   46282 system_pods.go:61] "metrics-server-57f55c9bc5-vq4p7" [be4949af-dd94-45ea-bb7d-2fe124ecd2a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:20.161876   46282 system_pods.go:61] "storage-provisioner" [d11f41ac-f846-46de-a517-dab454f05033] Running
	I0919 17:52:20.161885   46282 system_pods.go:74] duration metric: took 3.949793054s to wait for pod list to return data ...
	I0919 17:52:20.161895   46282 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:52:20.165017   46282 default_sa.go:45] found service account: "default"
	I0919 17:52:20.165041   46282 default_sa.go:55] duration metric: took 3.138746ms for default service account to be created ...
	I0919 17:52:20.165051   46282 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:52:20.171771   46282 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:20.171798   46282 system_pods.go:89] "coredns-5dd5756b68-6fxz5" [75ff79be-8293-4d55-b285-4c6d1d64adf0] Running
	I0919 17:52:20.171807   46282 system_pods.go:89] "etcd-default-k8s-diff-port-415555" [673a7ad9-f811-426c-aef6-7a36f2ddcacf] Running
	I0919 17:52:20.171815   46282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-415555" [a28bd2e3-0c0b-415e-986f-e92052c9eb0d] Running
	I0919 17:52:20.171823   46282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-415555" [b3b0d3ac-4f9f-4127-902e-4ea4c49100ba] Running
	I0919 17:52:20.171841   46282 system_pods.go:89] "kube-proxy-5cghw" [2f9f0da5-e5e6-40e4-bb49-16749597ac07] Running
	I0919 17:52:20.171847   46282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-415555" [60410add-cbf3-403a-9a84-f8566f804757] Running
	I0919 17:52:20.171858   46282 system_pods.go:89] "metrics-server-57f55c9bc5-vq4p7" [be4949af-dd94-45ea-bb7d-2fe124ecd2a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:20.171867   46282 system_pods.go:89] "storage-provisioner" [d11f41ac-f846-46de-a517-dab454f05033] Running
	I0919 17:52:20.171879   46282 system_pods.go:126] duration metric: took 6.820805ms to wait for k8s-apps to be running ...
	I0919 17:52:20.171891   46282 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:52:20.171944   46282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:20.191948   46282 system_svc.go:56] duration metric: took 20.046863ms WaitForService to wait for kubelet.
	I0919 17:52:20.191977   46282 kubeadm.go:581] duration metric: took 4m23.849755591s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:52:20.192003   46282 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:52:20.198066   46282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:52:20.198090   46282 node_conditions.go:123] node cpu capacity is 2
	I0919 17:52:20.198101   46282 node_conditions.go:105] duration metric: took 6.093464ms to run NodePressure ...
	I0919 17:52:20.198113   46282 start.go:228] waiting for startup goroutines ...
	I0919 17:52:20.198122   46282 start.go:233] waiting for cluster config update ...
	I0919 17:52:20.198131   46282 start.go:242] writing updated cluster config ...
	I0919 17:52:20.198390   46282 ssh_runner.go:195] Run: rm -f paused
	I0919 17:52:20.260334   46282 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:52:20.262660   46282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-415555" cluster and "default" namespace by default
	I0919 17:52:15.493238   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:17.495147   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:19.497990   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:16.500634   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:19.572697   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:25.436229   45961 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 17:52:25.436332   45961 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:52:25.436448   45961 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:52:25.436580   45961 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:52:25.436693   45961 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:52:25.436784   45961 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:52:25.438740   45961 out.go:204]   - Generating certificates and keys ...
	I0919 17:52:25.438831   45961 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:52:25.438907   45961 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:52:25.439035   45961 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:52:25.439117   45961 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:52:25.439225   45961 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:52:25.439306   45961 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:52:25.439378   45961 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:52:25.439455   45961 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:52:25.439554   45961 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:52:25.439646   45961 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:52:25.439692   45961 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:52:25.439759   45961 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:52:25.439825   45961 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:52:25.439892   45961 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:52:25.439982   45961 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:52:25.440068   45961 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:52:25.440183   45961 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:52:25.440276   45961 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:52:25.441897   45961 out.go:204]   - Booting up control plane ...
	I0919 17:52:25.442005   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:52:25.442103   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:52:25.442163   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:52:25.442248   45961 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:52:25.442343   45961 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:52:25.442428   45961 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 17:52:25.442641   45961 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:52:25.442703   45961 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003935 seconds
	I0919 17:52:25.442819   45961 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:52:25.442911   45961 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:52:25.442959   45961 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:52:25.443101   45961 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-215748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 17:52:25.443144   45961 kubeadm.go:322] [bootstrap-token] Using token: xzx8bb.31rxl0d2e5l1asvj
	I0919 17:52:25.444479   45961 out.go:204]   - Configuring RBAC rules ...
	I0919 17:52:25.444574   45961 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:52:25.444640   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 17:52:25.444747   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:52:25.444886   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:52:25.445049   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:52:25.445178   45961 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:52:25.445344   45961 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 17:52:25.445403   45961 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:52:25.445462   45961 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:52:25.445475   45961 kubeadm.go:322] 
	I0919 17:52:25.445558   45961 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:52:25.445569   45961 kubeadm.go:322] 
	I0919 17:52:25.445659   45961 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:52:25.445672   45961 kubeadm.go:322] 
	I0919 17:52:25.445691   45961 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:52:25.445740   45961 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:52:25.445779   45961 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:52:25.445785   45961 kubeadm.go:322] 
	I0919 17:52:25.445824   45961 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 17:52:25.445830   45961 kubeadm.go:322] 
	I0919 17:52:25.445873   45961 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 17:52:25.445879   45961 kubeadm.go:322] 
	I0919 17:52:25.445939   45961 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:52:25.446038   45961 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:52:25.446154   45961 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:52:25.446172   45961 kubeadm.go:322] 
	I0919 17:52:25.446275   45961 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 17:52:25.446361   45961 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:52:25.446371   45961 kubeadm.go:322] 
	I0919 17:52:25.446473   45961 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xzx8bb.31rxl0d2e5l1asvj \
	I0919 17:52:25.446594   45961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 17:52:25.446623   45961 kubeadm.go:322] 	--control-plane 
	I0919 17:52:25.446641   45961 kubeadm.go:322] 
	I0919 17:52:25.446774   45961 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:52:25.446782   45961 kubeadm.go:322] 
	I0919 17:52:25.446874   45961 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xzx8bb.31rxl0d2e5l1asvj \
	I0919 17:52:25.447044   45961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:52:25.447066   45961 cni.go:84] Creating CNI manager for ""
	I0919 17:52:25.447079   45961 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:52:25.448742   45961 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:52:21.994034   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:24.494339   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:25.656705   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:25.450147   45961 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:52:25.473476   45961 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 17:52:25.529295   45961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:52:25.529383   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:25.529387   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=no-preload-215748 minikube.k8s.io/updated_at=2023_09_19T17_52_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:25.625308   45961 ops.go:34] apiserver oom_adj: -16
	I0919 17:52:25.905954   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.037543   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.638479   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:27.138484   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:27.637901   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:28.138033   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:28.638787   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.494798   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:28.213192   45696 pod_ready.go:81] duration metric: took 4m0.001033854s waiting for pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:28.213226   45696 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:28.213243   45696 pod_ready.go:38] duration metric: took 4m12.067034727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:28.213266   45696 kubeadm.go:640] restartCluster took 4m32.254857032s
	W0919 17:52:28.213338   45696 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0919 17:52:28.213378   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:52:28.728646   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:29.138616   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:29.638381   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:30.138155   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:30.637984   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:31.137977   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:31.638547   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:32.138617   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:32.638253   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:33.138335   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:33.638302   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:34.804640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:34.138702   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:34.638549   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:35.138431   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:35.638642   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:36.138000   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:36.638726   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:37.138394   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:37.315805   45961 kubeadm.go:1081] duration metric: took 11.786488266s to wait for elevateKubeSystemPrivileges.
	I0919 17:52:37.315840   45961 kubeadm.go:406] StartCluster complete in 5m9.899215362s
	I0919 17:52:37.315856   45961 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:52:37.315945   45961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:52:37.317563   45961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:52:37.317815   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:52:37.317844   45961 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:52:37.317936   45961 addons.go:69] Setting storage-provisioner=true in profile "no-preload-215748"
	I0919 17:52:37.317943   45961 addons.go:69] Setting default-storageclass=true in profile "no-preload-215748"
	I0919 17:52:37.317959   45961 addons.go:231] Setting addon storage-provisioner=true in "no-preload-215748"
	I0919 17:52:37.317963   45961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-215748"
	W0919 17:52:37.317967   45961 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:52:37.317964   45961 addons.go:69] Setting metrics-server=true in profile "no-preload-215748"
	I0919 17:52:37.317988   45961 addons.go:231] Setting addon metrics-server=true in "no-preload-215748"
	W0919 17:52:37.318000   45961 addons.go:240] addon metrics-server should already be in state true
	I0919 17:52:37.318016   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.318041   45961 config.go:182] Loaded profile config "no-preload-215748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:52:37.318051   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.318380   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318407   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318416   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.318429   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.318475   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318495   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.334365   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0919 17:52:37.334822   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.335368   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.335395   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.335861   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.336052   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.337347   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0919 17:52:37.337347   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0919 17:52:37.337998   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.338047   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.338480   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.338498   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.338610   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.338632   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.338840   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.338941   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.339461   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.339490   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.339536   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.339565   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.354064   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45647
	I0919 17:52:37.354482   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.354893   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.354912   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.355353   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.355578   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.357181   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.359063   45961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:52:37.357674   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0919 17:52:37.358308   45961 addons.go:231] Setting addon default-storageclass=true in "no-preload-215748"
	W0919 17:52:37.360428   45961 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:52:37.360461   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.360569   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:52:37.360583   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:52:37.360602   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.360832   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.360869   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.360891   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.361393   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.361411   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.361836   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.362040   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.363959   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.364124   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.365928   45961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:52:37.364551   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.364765   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.367579   45961 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:52:37.367592   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:52:37.367609   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.367639   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.367660   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.367827   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.368140   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.370800   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.371215   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.371240   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.371416   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.371612   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.371777   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.371914   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.379222   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
	I0919 17:52:37.379631   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.380097   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.380122   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.380481   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.381718   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.381754   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.396647   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0919 17:52:37.397058   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.397474   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.397492   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.397842   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.397994   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.399762   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.400224   45961 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:52:37.400239   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:52:37.400255   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.403299   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.403745   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.403767   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.403773   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.403948   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.404080   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.404221   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.448139   45961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-215748" context rescaled to 1 replicas
	I0919 17:52:37.448183   45961 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:52:37.450076   45961 out.go:177] * Verifying Kubernetes components...
	I0919 17:52:37.451036   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:37.579553   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:52:37.592116   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:52:37.604757   45961 node_ready.go:35] waiting up to 6m0s for node "no-preload-215748" to be "Ready" ...
	I0919 17:52:37.605235   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:52:37.611496   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:52:37.611523   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:52:37.625762   45961 node_ready.go:49] node "no-preload-215748" has status "Ready":"True"
	I0919 17:52:37.625782   45961 node_ready.go:38] duration metric: took 20.997061ms waiting for node "no-preload-215748" to be "Ready" ...
	I0919 17:52:37.625790   45961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:37.638366   45961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.693993   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:52:37.694019   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:52:37.754746   45961 pod_ready.go:92] pod "etcd-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.754769   45961 pod_ready.go:81] duration metric: took 116.377819ms waiting for pod "etcd-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.754782   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.798115   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:52:37.798139   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:52:37.815124   45961 pod_ready.go:92] pod "kube-apiserver-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.815192   45961 pod_ready.go:81] duration metric: took 60.393176ms waiting for pod "kube-apiserver-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.815218   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.922999   45961 pod_ready.go:92] pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.923022   45961 pod_ready.go:81] duration metric: took 107.794672ms waiting for pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.923038   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hk6k2" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.995437   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:52:39.961838   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.382243112s)
	I0919 17:52:39.961884   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.961893   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.961902   45961 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.356635779s)
	I0919 17:52:39.961928   45961 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 17:52:39.961843   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.369699378s)
	I0919 17:52:39.961953   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.961963   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962202   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962219   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962231   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962239   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962348   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962409   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962447   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962490   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962517   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962540   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962553   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962563   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962526   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962601   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962778   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962819   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962828   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962942   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962959   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962972   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:40.064135   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.06864457s)
	I0919 17:52:40.064196   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:40.064212   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:40.064511   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:40.064532   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:40.064542   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:40.064552   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:40.064775   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:40.064835   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:40.064840   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:40.064850   45961 addons.go:467] Verifying addon metrics-server=true in "no-preload-215748"
	I0919 17:52:40.066741   45961 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0919 17:52:37.876720   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:40.068231   45961 addons.go:502] enable addons completed in 2.750388313s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0919 17:52:40.249105   45961 pod_ready.go:102] pod "kube-proxy-hk6k2" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:40.760507   45961 pod_ready.go:92] pod "kube-proxy-hk6k2" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:40.760532   45961 pod_ready.go:81] duration metric: took 2.837485326s waiting for pod "kube-proxy-hk6k2" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.760546   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.770519   45961 pod_ready.go:92] pod "kube-scheduler-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:40.770574   45961 pod_ready.go:81] duration metric: took 9.988955ms waiting for pod "kube-scheduler-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.770610   45961 pod_ready.go:38] duration metric: took 3.144808421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:40.770630   45961 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:52:40.770686   45961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:52:40.806513   45961 api_server.go:72] duration metric: took 3.358300901s to wait for apiserver process to appear ...
	I0919 17:52:40.806538   45961 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:52:40.806556   45961 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0919 17:52:40.812758   45961 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I0919 17:52:40.813960   45961 api_server.go:141] control plane version: v1.28.2
	I0919 17:52:40.813985   45961 api_server.go:131] duration metric: took 7.436946ms to wait for apiserver health ...
	I0919 17:52:40.813996   45961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:52:40.821498   45961 system_pods.go:59] 8 kube-system pods found
	I0919 17:52:40.821525   45961 system_pods.go:61] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:40.821536   45961 system_pods.go:61] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:40.821543   45961 system_pods.go:61] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:40.821549   45961 system_pods.go:61] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:40.821555   45961 system_pods.go:61] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:40.821563   45961 system_pods.go:61] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:40.821572   45961 system_pods.go:61] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:40.821583   45961 system_pods.go:61] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:40.821599   45961 system_pods.go:74] duration metric: took 7.595377ms to wait for pod list to return data ...
	I0919 17:52:40.821608   45961 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:52:40.828423   45961 default_sa.go:45] found service account: "default"
	I0919 17:52:40.828446   45961 default_sa.go:55] duration metric: took 6.830774ms for default service account to be created ...
	I0919 17:52:40.828455   45961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:52:41.018524   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.018560   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.018569   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.018578   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.018585   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.018591   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.018601   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.018612   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.018625   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.018645   45961 retry.go:31] will retry after 307.254812ms: missing components: kube-dns
	I0919 17:52:41.337815   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.337844   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.337851   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.337856   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.337863   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.337869   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.337875   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.337883   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.337893   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.337915   45961 retry.go:31] will retry after 378.465105ms: missing components: kube-dns
	I0919 17:52:41.734680   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.734717   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.734728   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.734736   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.734743   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.734750   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.734757   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.734765   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.734780   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.734801   45961 retry.go:31] will retry after 432.849904ms: missing components: kube-dns
	I0919 17:52:42.176510   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:42.176536   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Running
	I0919 17:52:42.176545   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:42.176552   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:42.176559   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:42.176569   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:42.176576   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:42.176590   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:42.176603   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Running
	I0919 17:52:42.176616   45961 system_pods.go:126] duration metric: took 1.348155168s to wait for k8s-apps to be running ...
	I0919 17:52:42.176628   45961 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:52:42.176683   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:42.189952   45961 system_svc.go:56] duration metric: took 13.312874ms WaitForService to wait for kubelet.
	I0919 17:52:42.189981   45961 kubeadm.go:581] duration metric: took 4.741777133s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:52:42.190012   45961 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:52:42.194919   45961 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:52:42.194945   45961 node_conditions.go:123] node cpu capacity is 2
	I0919 17:52:42.194957   45961 node_conditions.go:105] duration metric: took 4.939533ms to run NodePressure ...
	I0919 17:52:42.194969   45961 start.go:228] waiting for startup goroutines ...
	I0919 17:52:42.194978   45961 start.go:233] waiting for cluster config update ...
	I0919 17:52:42.194988   45961 start.go:242] writing updated cluster config ...
	I0919 17:52:42.195287   45961 ssh_runner.go:195] Run: rm -f paused
	I0919 17:52:42.245669   45961 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:52:42.248021   45961 out.go:177] * Done! kubectl is now configured to use "no-preload-215748" cluster and "default" namespace by default
	I0919 17:52:41.936906   45696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.723493225s)
	I0919 17:52:41.936983   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:41.951451   45696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:52:41.960478   45696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:52:41.968960   45696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:52:41.969031   45696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 17:52:42.019868   45696 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 17:52:42.020027   45696 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:52:42.171083   45696 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:52:42.171221   45696 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:52:42.171332   45696 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:52:42.429760   45696 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:52:42.431619   45696 out.go:204]   - Generating certificates and keys ...
	I0919 17:52:42.431770   45696 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:52:42.431870   45696 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:52:42.431973   45696 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:52:42.432172   45696 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:52:42.432781   45696 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:52:42.433451   45696 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:52:42.434353   45696 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:52:42.435577   45696 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:52:42.436820   45696 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:52:42.438302   45696 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:52:42.439391   45696 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:52:42.439509   45696 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:52:42.929570   45696 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:52:43.332709   45696 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:52:43.433651   45696 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:52:43.695104   45696 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:52:43.696103   45696 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:52:43.699874   45696 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:52:43.701784   45696 out.go:204]   - Booting up control plane ...
	I0919 17:52:43.701926   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:52:43.702063   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:52:43.702819   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:52:43.724659   45696 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:52:43.725576   45696 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:52:43.725671   45696 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 17:52:43.851582   45696 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:52:43.960637   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:47.032663   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:51.355564   45696 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504191 seconds
	I0919 17:52:51.355695   45696 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:52:51.376627   45696 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:52:51.908759   45696 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:52:51.909064   45696 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-415155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 17:52:52.424367   45696 kubeadm.go:322] [bootstrap-token] Using token: kntdz4.46i9d2q57hx70gnb
	I0919 17:52:52.425876   45696 out.go:204]   - Configuring RBAC rules ...
	I0919 17:52:52.425993   45696 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:52:52.433647   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 17:52:52.443514   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:52:52.447239   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:52:52.453258   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:52:52.459432   45696 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:52:52.475208   45696 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 17:52:52.722848   45696 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:52:52.841255   45696 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:52:52.841280   45696 kubeadm.go:322] 
	I0919 17:52:52.841356   45696 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:52:52.841369   45696 kubeadm.go:322] 
	I0919 17:52:52.841456   45696 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:52:52.841464   45696 kubeadm.go:322] 
	I0919 17:52:52.841502   45696 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:52:52.841568   45696 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:52:52.841637   45696 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:52:52.841648   45696 kubeadm.go:322] 
	I0919 17:52:52.841698   45696 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 17:52:52.841704   45696 kubeadm.go:322] 
	I0919 17:52:52.841745   45696 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 17:52:52.841780   45696 kubeadm.go:322] 
	I0919 17:52:52.841875   45696 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:52:52.841942   45696 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:52:52.842039   45696 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:52:52.842048   45696 kubeadm.go:322] 
	I0919 17:52:52.842134   45696 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 17:52:52.842243   45696 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:52:52.842262   45696 kubeadm.go:322] 
	I0919 17:52:52.842358   45696 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kntdz4.46i9d2q57hx70gnb \
	I0919 17:52:52.842491   45696 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 17:52:52.842523   45696 kubeadm.go:322] 	--control-plane 
	I0919 17:52:52.842530   45696 kubeadm.go:322] 
	I0919 17:52:52.842645   45696 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:52:52.842659   45696 kubeadm.go:322] 
	I0919 17:52:52.842773   45696 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kntdz4.46i9d2q57hx70gnb \
	I0919 17:52:52.842930   45696 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:52:52.844420   45696 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:52:52.844450   45696 cni.go:84] Creating CNI manager for ""
	I0919 17:52:52.844461   45696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:52:52.846322   45696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:52:52.848269   45696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:52:52.875578   45696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 17:52:52.905183   45696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:52:52.905261   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:52.905281   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=embed-certs-415155 minikube.k8s.io/updated_at=2023_09_19T17_52_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:52.993717   45696 ops.go:34] apiserver oom_adj: -16
	I0919 17:52:53.208727   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.311165   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.904182   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:54.403711   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:54.904152   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:55.404377   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.108640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:55.903772   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.404320   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.904201   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:57.403637   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:57.904174   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:58.404553   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:58.903691   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:59.403716   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:59.903872   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:00.403725   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.180664   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:00.904540   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:01.404211   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:01.903897   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.403857   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.903841   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:03.404601   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:03.904222   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:04.404483   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:04.903813   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:05.404474   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.260629   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:05.332731   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:05.904337   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:06.003333   45696 kubeadm.go:1081] duration metric: took 13.098131801s to wait for elevateKubeSystemPrivileges.
	I0919 17:53:06.003365   45696 kubeadm.go:406] StartCluster complete in 5m10.10389936s
	I0919 17:53:06.003387   45696 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:53:06.003476   45696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:53:06.005541   45696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:53:06.005772   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:53:06.005785   45696 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:53:06.005854   45696 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-415155"
	I0919 17:53:06.005877   45696 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-415155"
	W0919 17:53:06.005884   45696 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:53:06.005926   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.005930   45696 addons.go:69] Setting default-storageclass=true in profile "embed-certs-415155"
	I0919 17:53:06.005946   45696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-415155"
	I0919 17:53:06.005979   45696 config.go:182] Loaded profile config "embed-certs-415155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:53:06.005982   45696 addons.go:69] Setting metrics-server=true in profile "embed-certs-415155"
	I0919 17:53:06.006009   45696 addons.go:231] Setting addon metrics-server=true in "embed-certs-415155"
	W0919 17:53:06.006026   45696 addons.go:240] addon metrics-server should already be in state true
	I0919 17:53:06.006071   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.006331   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006328   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006364   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.006396   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.006451   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006493   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.023141   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43557
	I0919 17:53:06.023485   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I0919 17:53:06.023646   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I0919 17:53:06.023657   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.023882   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.024040   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.024209   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024230   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.024333   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024358   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.024616   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.024697   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.024810   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024827   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.025260   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.025301   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.025486   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.025695   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.026032   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.026062   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.044712   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
	I0919 17:53:06.045176   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.045627   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.045646   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.045976   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.046161   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.047603   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.049519   45696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:53:06.047878   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0919 17:53:06.052909   45696 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:53:06.052922   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:53:06.052937   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.053277   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.053868   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.053887   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.054337   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.054580   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.056666   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.056710   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.058604   45696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:53:06.057084   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.057313   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.060027   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.060046   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:53:06.060060   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:53:06.060079   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.060210   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.060497   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.060815   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.062794   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.063165   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.063196   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.063327   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.063475   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.063593   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.063701   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.066891   45696 addons.go:231] Setting addon default-storageclass=true in "embed-certs-415155"
	W0919 17:53:06.066905   45696 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:53:06.066926   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.066965   45696 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-415155" context rescaled to 1 replicas
	I0919 17:53:06.066987   45696 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.6 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:53:06.068622   45696 out.go:177] * Verifying Kubernetes components...
	I0919 17:53:06.067176   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.070241   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.070253   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:53:06.085010   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0919 17:53:06.085392   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.085940   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.085976   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.086322   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.086774   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.086820   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.101494   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0919 17:53:06.101938   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.102528   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.102552   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.103014   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.103256   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.104793   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.105087   45696 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:53:06.105107   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:53:06.105127   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.107742   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.108073   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.108105   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.108336   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.108547   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.108744   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.108908   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.205454   45696 node_ready.go:35] waiting up to 6m0s for node "embed-certs-415155" to be "Ready" ...
	I0919 17:53:06.205565   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:53:06.225929   45696 node_ready.go:49] node "embed-certs-415155" has status "Ready":"True"
	I0919 17:53:06.225949   45696 node_ready.go:38] duration metric: took 20.464817ms waiting for node "embed-certs-415155" to be "Ready" ...
	I0919 17:53:06.225957   45696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:53:06.251954   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:53:06.251981   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:53:06.269198   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:53:06.296923   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:53:06.314108   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:53:06.314141   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:53:06.338106   45696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:06.378123   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:53:06.378154   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:53:06.492313   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:53:08.235564   45696 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.029959877s)
	I0919 17:53:08.235599   45696 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0919 17:53:08.597917   45696 pod_ready.go:102] pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace has status "Ready":"False"
	I0919 17:53:08.741920   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.44495643s)
	I0919 17:53:08.741982   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.741995   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.741926   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.472691573s)
	I0919 17:53:08.742031   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742050   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742377   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742393   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742403   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742413   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742492   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.742542   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742555   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742566   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742576   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742617   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742630   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742643   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742651   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742771   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742785   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.744274   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.744297   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.818418   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.326058126s)
	I0919 17:53:08.818472   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.818486   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.818839   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.818891   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.818927   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.818938   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.818948   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.820442   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.820464   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.820474   45696 addons.go:467] Verifying addon metrics-server=true in "embed-certs-415155"
	I0919 17:53:08.820479   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.822508   45696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0919 17:53:08.824220   45696 addons.go:502] enable addons completed in 2.818433307s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0919 17:53:10.561437   45696 pod_ready.go:92] pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.561462   45696 pod_ready.go:81] duration metric: took 4.223330172s waiting for pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.561472   45696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.568541   45696 pod_ready.go:92] pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.568566   45696 pod_ready.go:81] duration metric: took 7.086927ms waiting for pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.568579   45696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.577684   45696 pod_ready.go:92] pod "etcd-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.577709   45696 pod_ready.go:81] duration metric: took 9.120912ms waiting for pod "etcd-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.577722   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.585005   45696 pod_ready.go:92] pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.585033   45696 pod_ready.go:81] duration metric: took 7.302173ms waiting for pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.585043   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.590934   45696 pod_ready.go:92] pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.590951   45696 pod_ready.go:81] duration metric: took 5.90203ms waiting for pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.590960   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b75j2" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.358510   45696 pod_ready.go:92] pod "kube-proxy-b75j2" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:11.358535   45696 pod_ready.go:81] duration metric: took 767.569086ms waiting for pod "kube-proxy-b75j2" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.358544   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.759839   45696 pod_ready.go:92] pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:11.759863   45696 pod_ready.go:81] duration metric: took 401.313058ms waiting for pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.759872   45696 pod_ready.go:38] duration metric: took 5.533896789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:53:11.759887   45696 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:53:11.759933   45696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:53:11.773700   45696 api_server.go:72] duration metric: took 5.706687251s to wait for apiserver process to appear ...
	I0919 17:53:11.773730   45696 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:53:11.773747   45696 api_server.go:253] Checking apiserver healthz at https://192.168.50.6:8443/healthz ...
	I0919 17:53:11.784435   45696 api_server.go:279] https://192.168.50.6:8443/healthz returned 200:
	ok
	I0919 17:53:11.785929   45696 api_server.go:141] control plane version: v1.28.2
	I0919 17:53:11.785952   45696 api_server.go:131] duration metric: took 12.214361ms to wait for apiserver health ...
	I0919 17:53:11.785971   45696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:53:11.961906   45696 system_pods.go:59] 9 kube-system pods found
	I0919 17:53:11.961937   45696 system_pods.go:61] "coredns-5dd5756b68-2dbbk" [93175ebd-b717-4c98-a56b-aca1404ac8bd] Running
	I0919 17:53:11.961945   45696 system_pods.go:61] "coredns-5dd5756b68-6gswl" [6dee60e5-fa0a-4b65-b4d1-dbb0fff29d3f] Running
	I0919 17:53:11.961952   45696 system_pods.go:61] "etcd-embed-certs-415155" [47f8759f-be32-4f38-9bc4-c9a0578b6303] Running
	I0919 17:53:11.961959   45696 system_pods.go:61] "kube-apiserver-embed-certs-415155" [8682e140-4951-4ea4-a7df-5b1324e33094] Running
	I0919 17:53:11.961967   45696 system_pods.go:61] "kube-controller-manager-embed-certs-415155" [00572708-93f6-4bf5-b6c4-83e9c588b071] Running
	I0919 17:53:11.961973   45696 system_pods.go:61] "kube-proxy-b75j2" [7be05aae-86ca-4640-a0f3-6518e7896711] Running
	I0919 17:53:11.961981   45696 system_pods.go:61] "kube-scheduler-embed-certs-415155" [4532a79d-a306-46a9-a0aa-d1f48b90645c] Running
	I0919 17:53:11.961991   45696 system_pods.go:61] "metrics-server-57f55c9bc5-kdxsz" [1588f0a7-18ae-402b-8916-e3a6423e9e15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:53:11.962003   45696 system_pods.go:61] "storage-provisioner" [83e9eb53-dd92-4b84-a787-82bea5449cd2] Running
	I0919 17:53:11.962013   45696 system_pods.go:74] duration metric: took 176.035985ms to wait for pod list to return data ...
	I0919 17:53:11.962027   45696 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:53:12.157305   45696 default_sa.go:45] found service account: "default"
	I0919 17:53:12.157328   45696 default_sa.go:55] duration metric: took 195.295342ms for default service account to be created ...
	I0919 17:53:12.157336   45696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:53:12.359884   45696 system_pods.go:86] 9 kube-system pods found
	I0919 17:53:12.359910   45696 system_pods.go:89] "coredns-5dd5756b68-2dbbk" [93175ebd-b717-4c98-a56b-aca1404ac8bd] Running
	I0919 17:53:12.359916   45696 system_pods.go:89] "coredns-5dd5756b68-6gswl" [6dee60e5-fa0a-4b65-b4d1-dbb0fff29d3f] Running
	I0919 17:53:12.359920   45696 system_pods.go:89] "etcd-embed-certs-415155" [47f8759f-be32-4f38-9bc4-c9a0578b6303] Running
	I0919 17:53:12.359924   45696 system_pods.go:89] "kube-apiserver-embed-certs-415155" [8682e140-4951-4ea4-a7df-5b1324e33094] Running
	I0919 17:53:12.359929   45696 system_pods.go:89] "kube-controller-manager-embed-certs-415155" [00572708-93f6-4bf5-b6c4-83e9c588b071] Running
	I0919 17:53:12.359932   45696 system_pods.go:89] "kube-proxy-b75j2" [7be05aae-86ca-4640-a0f3-6518e7896711] Running
	I0919 17:53:12.359936   45696 system_pods.go:89] "kube-scheduler-embed-certs-415155" [4532a79d-a306-46a9-a0aa-d1f48b90645c] Running
	I0919 17:53:12.359943   45696 system_pods.go:89] "metrics-server-57f55c9bc5-kdxsz" [1588f0a7-18ae-402b-8916-e3a6423e9e15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:53:12.359948   45696 system_pods.go:89] "storage-provisioner" [83e9eb53-dd92-4b84-a787-82bea5449cd2] Running
	I0919 17:53:12.359956   45696 system_pods.go:126] duration metric: took 202.614357ms to wait for k8s-apps to be running ...
	I0919 17:53:12.359962   45696 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:53:12.359999   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:53:12.373545   45696 system_svc.go:56] duration metric: took 13.572497ms WaitForService to wait for kubelet.
	I0919 17:53:12.373579   45696 kubeadm.go:581] duration metric: took 6.30657382s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:53:12.373607   45696 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:53:12.557409   45696 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:53:12.557435   45696 node_conditions.go:123] node cpu capacity is 2
	I0919 17:53:12.557444   45696 node_conditions.go:105] duration metric: took 183.83246ms to run NodePressure ...
	I0919 17:53:12.557455   45696 start.go:228] waiting for startup goroutines ...
	I0919 17:53:12.557461   45696 start.go:233] waiting for cluster config update ...
	I0919 17:53:12.557469   45696 start.go:242] writing updated cluster config ...
	I0919 17:53:12.557699   45696 ssh_runner.go:195] Run: rm -f paused
	I0919 17:53:12.605145   45696 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:53:12.607197   45696 out.go:177] * Done! kubectl is now configured to use "embed-certs-415155" cluster and "default" namespace by default
	I0919 17:53:11.412630   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:14.488732   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:20.564623   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:23.636680   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:29.716717   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:32.788701   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:38.868669   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:41.940647   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:48.020643   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:51.092656   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:57.172691   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:54:00.244719   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:54:03.245602   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:54:03.245640   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:03.247321   47798 machine.go:91] provisioned docker machine in 4m37.423277683s
	I0919 17:54:03.247365   47798 fix.go:56] fixHost completed within 4m37.445374366s
	I0919 17:54:03.247373   47798 start.go:83] releasing machines lock for "old-k8s-version-100627", held for 4m37.445391375s
	W0919 17:54:03.247389   47798 start.go:688] error starting host: provision: host is not running
	W0919 17:54:03.247488   47798 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0919 17:54:03.247503   47798 start.go:703] Will try again in 5 seconds ...
	I0919 17:54:08.249214   47798 start.go:365] acquiring machines lock for old-k8s-version-100627: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:54:08.249335   47798 start.go:369] acquired machines lock for "old-k8s-version-100627" in 79.973µs
	I0919 17:54:08.249367   47798 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:54:08.249377   47798 fix.go:54] fixHost starting: 
	I0919 17:54:08.249707   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:54:08.249734   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:54:08.264866   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I0919 17:54:08.265315   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:54:08.265726   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:54:08.265759   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:54:08.266072   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:54:08.266269   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:08.266419   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:54:08.267941   47798 fix.go:102] recreateIfNeeded on old-k8s-version-100627: state=Stopped err=<nil>
	I0919 17:54:08.267960   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	W0919 17:54:08.268118   47798 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:54:08.269915   47798 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-100627" ...
	I0919 17:54:08.271210   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Start
	I0919 17:54:08.271445   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring networks are active...
	I0919 17:54:08.272016   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network default is active
	I0919 17:54:08.272329   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network mk-old-k8s-version-100627 is active
	I0919 17:54:08.272743   47798 main.go:141] libmachine: (old-k8s-version-100627) Getting domain xml...
	I0919 17:54:08.273350   47798 main.go:141] libmachine: (old-k8s-version-100627) Creating domain...
	I0919 17:54:09.557879   47798 main.go:141] libmachine: (old-k8s-version-100627) Waiting to get IP...
	I0919 17:54:09.558718   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:09.559190   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:09.559270   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:09.559173   48693 retry.go:31] will retry after 309.613104ms: waiting for machine to come up
	I0919 17:54:09.870868   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:09.871472   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:09.871496   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:09.871435   48693 retry.go:31] will retry after 375.744574ms: waiting for machine to come up
	I0919 17:54:10.249255   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.249750   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.249780   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.249702   48693 retry.go:31] will retry after 305.257713ms: waiting for machine to come up
	I0919 17:54:10.556042   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.556587   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.556621   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.556510   48693 retry.go:31] will retry after 394.207165ms: waiting for machine to come up
	I0919 17:54:10.952178   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.952797   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.952828   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.952732   48693 retry.go:31] will retry after 706.704251ms: waiting for machine to come up
	I0919 17:54:11.660566   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:11.661038   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:11.661061   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:11.660988   48693 retry.go:31] will retry after 924.155076ms: waiting for machine to come up
	I0919 17:54:12.586278   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:12.586772   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:12.586805   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:12.586721   48693 retry.go:31] will retry after 1.035300526s: waiting for machine to come up
	I0919 17:54:13.623123   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:13.623597   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:13.623622   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:13.623562   48693 retry.go:31] will retry after 1.060639157s: waiting for machine to come up
	I0919 17:54:14.685531   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:14.686012   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:14.686044   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:14.685973   48693 retry.go:31] will retry after 1.61320677s: waiting for machine to come up
	I0919 17:54:16.301447   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:16.301908   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:16.301957   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:16.301864   48693 retry.go:31] will retry after 2.031293541s: waiting for machine to come up
	I0919 17:54:18.334791   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:18.335384   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:18.335440   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:18.335329   48693 retry.go:31] will retry after 1.861837572s: waiting for machine to come up
	I0919 17:54:20.199546   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:20.200058   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:20.200088   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:20.200009   48693 retry.go:31] will retry after 2.332364238s: waiting for machine to come up
	I0919 17:54:22.533654   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:22.534131   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:22.534162   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:22.534071   48693 retry.go:31] will retry after 4.475201998s: waiting for machine to come up
	I0919 17:54:27.013553   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.014052   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has current primary IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.014075   47798 main.go:141] libmachine: (old-k8s-version-100627) Found IP for machine: 192.168.72.182
	I0919 17:54:27.014091   47798 main.go:141] libmachine: (old-k8s-version-100627) Reserving static IP address...
	I0919 17:54:27.014512   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "old-k8s-version-100627", mac: "52:54:00:ee:1d:e7", ip: "192.168.72.182"} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.014535   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | skip adding static IP to network mk-old-k8s-version-100627 - found existing host DHCP lease matching {name: "old-k8s-version-100627", mac: "52:54:00:ee:1d:e7", ip: "192.168.72.182"}
	I0919 17:54:27.014560   47798 main.go:141] libmachine: (old-k8s-version-100627) Reserved static IP address: 192.168.72.182
	I0919 17:54:27.014579   47798 main.go:141] libmachine: (old-k8s-version-100627) Waiting for SSH to be available...
	I0919 17:54:27.014592   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Getting to WaitForSSH function...
	I0919 17:54:27.016929   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.017394   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.017431   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.017594   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH client type: external
	I0919 17:54:27.017634   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa (-rw-------)
	I0919 17:54:27.017678   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:54:27.017700   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | About to run SSH command:
	I0919 17:54:27.017711   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | exit 0
	I0919 17:54:27.112557   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | SSH cmd err, output: <nil>: 
	I0919 17:54:27.112933   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetConfigRaw
	I0919 17:54:27.113574   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:27.116176   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.116556   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.116581   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.116841   47798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:54:27.117019   47798 machine.go:88] provisioning docker machine ...
	I0919 17:54:27.117036   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:27.117261   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.117429   47798 buildroot.go:166] provisioning hostname "old-k8s-version-100627"
	I0919 17:54:27.117447   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.117599   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.119667   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.119987   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.120020   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.120131   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.120278   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.120442   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.120625   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.120795   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.121114   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.121128   47798 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100627 && echo "old-k8s-version-100627" | sudo tee /etc/hostname
	I0919 17:54:27.264601   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-100627
	
	I0919 17:54:27.264628   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.267433   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.267871   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.267906   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.268044   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.268260   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.268459   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.268589   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.268764   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.269227   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.269258   47798 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-100627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-100627/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-100627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:54:27.408513   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:54:27.408544   47798 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:54:27.408566   47798 buildroot.go:174] setting up certificates
	I0919 17:54:27.408590   47798 provision.go:83] configureAuth start
	I0919 17:54:27.408607   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.408923   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:27.411896   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.412345   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.412376   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.412595   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.414909   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.415293   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.415331   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.415417   47798 provision.go:138] copyHostCerts
	I0919 17:54:27.415479   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:54:27.415491   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:54:27.415556   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:54:27.415662   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:54:27.415675   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:54:27.415721   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:54:27.415941   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:54:27.415954   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:54:27.415990   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:54:27.416043   47798 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-100627 san=[192.168.72.182 192.168.72.182 localhost 127.0.0.1 minikube old-k8s-version-100627]
	I0919 17:54:27.473903   47798 provision.go:172] copyRemoteCerts
	I0919 17:54:27.473953   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:54:27.473978   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.476857   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.477234   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.477272   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.477453   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.477649   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.477818   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.477957   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:27.578694   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:54:27.603580   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 17:54:27.629314   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:54:27.653764   47798 provision.go:86] duration metric: configureAuth took 245.159127ms
	I0919 17:54:27.653788   47798 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:54:27.653989   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:54:27.654081   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.656608   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.657058   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.657113   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.657286   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.657453   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.657605   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.657785   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.657972   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.658276   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.658292   47798 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:54:28.000190   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:54:28.000238   47798 machine.go:91] provisioned docker machine in 883.206741ms
	I0919 17:54:28.000251   47798 start.go:300] post-start starting for "old-k8s-version-100627" (driver="kvm2")
	I0919 17:54:28.000265   47798 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:54:28.000288   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.000617   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:54:28.000650   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.003541   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.003980   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.004027   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.004182   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.004383   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.004583   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.004749   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.099219   47798 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:54:28.103738   47798 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:54:28.103766   47798 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:54:28.103853   47798 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:54:28.103953   47798 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:54:28.104066   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:54:28.115827   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:54:28.139080   47798 start.go:303] post-start completed in 138.802144ms
	I0919 17:54:28.139102   47798 fix.go:56] fixHost completed within 19.88972528s
	I0919 17:54:28.139121   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.141760   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.142169   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.142195   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.142396   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.142607   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.142726   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.142917   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.143114   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:28.143573   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:28.143592   47798 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:54:28.277495   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695146068.223192427
	
	I0919 17:54:28.277520   47798 fix.go:206] guest clock: 1695146068.223192427
	I0919 17:54:28.277530   47798 fix.go:219] Guest: 2023-09-19 17:54:28.223192427 +0000 UTC Remote: 2023-09-19 17:54:28.139105122 +0000 UTC m=+302.480491248 (delta=84.087305ms)
	I0919 17:54:28.277553   47798 fix.go:190] guest clock delta is within tolerance: 84.087305ms
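The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host-side timestamp, and accept the drift when it stays inside a tolerance (84ms here). A minimal Go sketch of that comparison using the values from this log; it is not minikube's actual fix.go code, and the name checkClockDelta and the 2-second tolerance are illustrative assumptions:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// checkClockDelta parses the guest's `date +%s.%N` output and reports whether the
// difference from the host-side timestamp is within the allowed tolerance.
func checkClockDelta(guestOutput string, remote time.Time, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// "Remote" timestamp from the log: 2023-09-19 17:54:28.139105122 UTC.
	remote := time.Unix(0, 1695146068139105122)
	delta, ok, err := checkClockDelta("1695146068.223192427", remote, 2*time.Second)
	fmt.Println(delta, ok, err) // expect roughly 84ms, true, <nil>
}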
	I0919 17:54:28.277559   47798 start.go:83] releasing machines lock for "old-k8s-version-100627", held for 20.02820818s
	I0919 17:54:28.277581   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.277863   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:28.280976   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.281274   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.281314   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.281491   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282065   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282261   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282362   47798 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:54:28.282425   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.282518   47798 ssh_runner.go:195] Run: cat /version.json
	I0919 17:54:28.282557   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.285235   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285574   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285626   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.285660   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285758   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.285980   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.286009   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.286037   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.286133   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.286185   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.286298   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.286345   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.286479   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.286613   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.377342   47798 ssh_runner.go:195] Run: systemctl --version
	I0919 17:54:28.402900   47798 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:54:28.551979   47798 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:54:28.558949   47798 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:54:28.559040   47798 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:54:28.574671   47798 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:54:28.574707   47798 start.go:469] detecting cgroup driver to use...
	I0919 17:54:28.574789   47798 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:54:28.589301   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:54:28.603381   47798 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:54:28.603456   47798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:54:28.616574   47798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:54:28.630029   47798 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:54:28.735665   47798 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:54:28.855576   47798 docker.go:212] disabling docker service ...
	I0919 17:54:28.855656   47798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:54:28.869977   47798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:54:28.883344   47798 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:54:29.010033   47798 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:54:29.123737   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:54:29.136560   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:54:29.153418   47798 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0919 17:54:29.153472   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.164328   47798 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:54:29.164376   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.175468   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.186361   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.197606   47798 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:54:29.209144   47798 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:54:29.219566   47798 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:54:29.219608   47798 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 17:54:29.232771   47798 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
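The three commands above follow a common pattern: the bridge-netfilter sysctl does not exist until the br_netfilter module is loaded, so the failed probe is tolerated ("which might be okay"), the module is loaded, and IPv4 forwarding is switched on. A rough Go sketch of that sequence; ensureNetfilter is an illustrative name, not a minikube function:

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetfilter mirrors the logged sequence: probe the bridge netfilter sysctl,
// load br_netfilter if the sysctl is missing, then enable IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl only appears once the module is loaded, so a failure here is expected.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	fmt.Println(ensureNetfilter())
}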
	I0919 17:54:29.241491   47798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:54:29.363253   47798 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:54:29.564774   47798 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:54:29.564853   47798 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:54:29.570170   47798 start.go:537] Will wait 60s for crictl version
	I0919 17:54:29.570236   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:29.574361   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:54:29.613496   47798 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:54:29.613591   47798 ssh_runner.go:195] Run: crio --version
	I0919 17:54:29.668331   47798 ssh_runner.go:195] Run: crio --version
	I0919 17:54:29.724060   47798 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0919 17:54:29.725565   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:29.728603   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:29.729060   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:29.729090   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:29.729325   47798 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0919 17:54:29.733860   47798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
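The bash one-liner above updates /etc/hosts idempotently: it drops any existing host.minikube.internal line and appends a fresh "IP<TAB>name" entry. A small Go sketch of the same idea, operating on a scratch copy rather than the real /etc/hosts; addHostEntry is an assumed helper name:

package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostEntry removes any stale line ending in "<TAB>name" and appends a new
// "ip<TAB>name" entry, matching the grep -v / echo pipeline in the log.
func addHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the old entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Work on a scratch copy; the real run targets /etc/hosts via sudo cp.
	tmp := "/tmp/hosts.copy"
	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
	fmt.Println(addHostEntry(tmp, "192.168.72.1", "host.minikube.internal"))
}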
	I0919 17:54:29.745878   47798 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:54:29.745937   47798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:54:29.783853   47798 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:54:29.783912   47798 ssh_runner.go:195] Run: which lz4
	I0919 17:54:29.787843   47798 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 17:54:29.792095   47798 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:54:29.792124   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0919 17:54:31.578682   47798 crio.go:444] Took 1.790863 seconds to copy over tarball
	I0919 17:54:31.578766   47798 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 17:54:34.491190   47798 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.912396501s)
	I0919 17:54:34.491218   47798 crio.go:451] Took 2.912514 seconds to extract the tarball
	I0919 17:54:34.491227   47798 ssh_runner.go:146] rm: /preloaded.tar.lz4
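The preload step above copies a ~440 MB lz4-compressed image tarball to the guest, unpacks it into /var with tar, and then removes the tarball. A hedged Go sketch of the guest-side part; extractPreload is an illustrative name, and the paths are taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs the same commands as logged: extract the lz4 tarball into /var,
// then delete it once the images are unpacked.
func extractPreload(tarball string) error {
	// Equivalent to: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	// Remove the tarball afterwards, as the rm step in the log does.
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}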
	I0919 17:54:34.532896   47798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:54:34.584238   47798 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:54:34.584259   47798 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 17:54:34.584318   47798 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.584343   47798 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0919 17:54:34.584357   47798 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.584378   47798 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:34.584540   47798 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.584551   47798 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.584565   47798 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.584321   47798 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:34.586227   47798 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:34.586227   47798 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.586253   47798 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.586228   47798 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.586234   47798 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0919 17:54:34.586352   47798 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.586266   47798 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:34.586581   47798 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.759785   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0919 17:54:34.802920   47798 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0919 17:54:34.802955   47798 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0919 17:54:34.803013   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:34.807458   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0919 17:54:34.847013   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0919 17:54:34.847128   47798 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.852501   47798 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0919 17:54:34.852523   47798 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.852579   47798 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.853807   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.857117   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.858504   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.859676   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.868306   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.920560   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:35.645907   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:37.386271   47798 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.533664793s)
	I0919 17:54:37.386302   47798 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0919 17:54:37.386337   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2: (2.532490506s)
	I0919 17:54:37.386377   47798 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0919 17:54:37.386391   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0: (2.529252811s)
	I0919 17:54:37.386410   47798 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:37.386437   47798 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0919 17:54:37.386458   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386462   47798 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:37.386469   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0: (2.527943734s)
	I0919 17:54:37.386508   47798 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0919 17:54:37.386516   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386529   47798 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:37.386549   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0: (2.526835511s)
	I0919 17:54:37.386581   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0: (2.518230422s)
	I0919 17:54:37.386605   47798 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0919 17:54:37.386609   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0: (2.466014033s)
	I0919 17:54:37.386609   47798 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0919 17:54:37.386628   47798 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:37.386629   47798 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:37.386638   47798 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0919 17:54:37.386566   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386659   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386659   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386662   47798 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:37.386765   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386701   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.740765346s)
	I0919 17:54:37.399029   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:37.399077   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:37.399121   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:37.399122   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:37.402150   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:37.402313   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0919 17:54:37.540994   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0919 17:54:37.541026   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0919 17:54:37.541059   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0919 17:54:37.541106   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0919 17:54:37.541145   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0919 17:54:37.549028   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0919 17:54:37.549081   47798 cache_images.go:92] LoadImages completed in 2.964810789s
	W0919 17:54:37.549147   47798 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
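The block above decides, image by image, whether the container runtime already holds the required tag; anything missing is removed with crictl and re-loaded from the local cache, and the warning is printed when a cached tarball (here etcd_3.3.15-0) does not exist on disk. A rough Go sketch of that decision, not minikube's cache_images.go; ensureImage and the cache layout are assumptions based on the paths in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// ensureImage checks whether the runtime already has the image and, if not,
// loads it from a locally cached tarball when one exists.
func ensureImage(image, cacheDir string) error {
	// `podman image inspect` exits non-zero when the image is absent from the runtime.
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present
	}
	// Cached tarballs are named like registry.k8s.io/pause_3.1 under the cache directory.
	cached := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(cached); err != nil {
		return fmt.Errorf("unable to load cached image %s: %w", image, err) // matches the warning above
	}
	return exec.Command("sudo", "podman", "load", "-i", cached).Run()
}

func main() {
	err := ensureImage("registry.k8s.io/pause:3.1",
		"/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64")
	fmt.Println(err)
}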
	I0919 17:54:37.549230   47798 ssh_runner.go:195] Run: crio config
	I0919 17:54:37.603915   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:54:37.603954   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:54:37.603977   47798 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:54:37.604007   47798 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-100627 NodeName:old-k8s-version-100627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0919 17:54:37.604180   47798 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-100627"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-100627
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.182:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:54:37.604310   47798 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-100627 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:54:37.604383   47798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0919 17:54:37.614235   47798 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:54:37.614296   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:54:37.623423   47798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0919 17:54:37.640384   47798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:54:37.656081   47798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0919 17:54:37.672787   47798 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0919 17:54:37.676417   47798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:54:37.687828   47798 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627 for IP: 192.168.72.182
	I0919 17:54:37.687874   47798 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:54:37.688058   47798 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:54:37.688143   47798 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:54:37.688222   47798 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.key
	I0919 17:54:37.688279   47798 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key.3425b032
	I0919 17:54:37.688322   47798 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key
	I0919 17:54:37.688488   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:54:37.688531   47798 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:54:37.688546   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:54:37.688579   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:54:37.688609   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:54:37.688636   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:54:37.688697   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:54:37.689406   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:54:37.714671   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 17:54:37.737884   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:54:37.761839   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 17:54:37.784692   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:54:37.810865   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:54:37.832897   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:54:37.856026   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:54:37.879335   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:54:37.902377   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:54:37.924388   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:54:37.948816   47798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:54:37.965669   47798 ssh_runner.go:195] Run: openssl version
	I0919 17:54:37.971227   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:54:37.983269   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.988756   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.988807   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.994392   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:54:38.006098   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:54:38.017868   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.022601   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.022655   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.028421   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:54:38.039288   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:54:38.053131   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.057881   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.057938   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.063816   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
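The openssl/ln pairs above place each CA certificate under /usr/share/ca-certificates and link it into /etc/ssl/certs by its OpenSSL subject hash, which is how the system trust store locates it. A minimal sketch of one such link; linkCert is an illustrative name, not a minikube function:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM file and creates the
// /etc/ssl/certs/<hash>.0 symlink if it is not already present.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Equivalent to: test -L <link> || ln -fs <pem> <link>
	return exec.Command("sudo", "/bin/bash", "-c",
		fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)).Run()
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
}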
	I0919 17:54:38.074972   47798 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:54:38.080260   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:54:38.085942   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:54:38.091638   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:54:38.097282   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:54:38.103194   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 17:54:38.109759   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 17:54:38.115202   47798 kubeadm.go:404] StartCluster: {Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:54:38.115274   47798 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 17:54:38.115313   47798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:54:38.153988   47798 cri.go:89] found id: ""
	I0919 17:54:38.154063   47798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:54:38.164888   47798 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0919 17:54:38.164913   47798 kubeadm.go:636] restartCluster start
	I0919 17:54:38.164965   47798 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 17:54:38.174810   47798 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.175856   47798 kubeconfig.go:92] found "old-k8s-version-100627" server: "https://192.168.72.182:8443"
	I0919 17:54:38.178372   47798 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 17:54:38.187917   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.187969   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.199654   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.199674   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.199715   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.211155   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.712221   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.712312   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.725306   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:39.211431   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:39.211494   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:39.223919   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:39.711400   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:39.711482   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:39.724103   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:40.211311   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:40.211379   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:40.224111   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:40.711529   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:40.711609   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:40.724291   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:41.212183   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:41.212285   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:41.225226   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:41.711742   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:41.711821   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:41.724590   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:42.212221   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:42.212289   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:42.225772   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:42.711304   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:42.711378   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:42.724468   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:43.211895   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:43.211978   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:43.225017   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:43.711734   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:43.711824   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:43.724995   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:44.211535   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:44.211616   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:44.224372   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:44.712113   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:44.712179   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:44.725330   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:45.211942   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:45.212027   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:45.226290   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:45.712216   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:45.712295   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:45.725065   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:46.212053   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:46.212150   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:46.226417   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:46.711997   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:46.712082   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:46.725608   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:47.212214   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:47.212300   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:47.224935   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:47.711452   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:47.711540   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:47.723970   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:48.188749   47798 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0919 17:54:48.188785   47798 kubeadm.go:1128] stopping kube-system containers ...
	I0919 17:54:48.188800   47798 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0919 17:54:48.188862   47798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:54:48.227729   47798 cri.go:89] found id: ""
	I0919 17:54:48.227789   47798 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 17:54:48.243618   47798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:54:48.253221   47798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:54:48.253285   47798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:54:48.262806   47798 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0919 17:54:48.262831   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:48.405093   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.114151   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.324152   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.457833   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
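Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases against the regenerated kubeadm.yaml rather than performing a full init. A sketch of that phase sequence as logged; runPhases is an illustrative name, and the binary and config paths are taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// runPhases executes the same kubeadm init phases, in the same order, as the
// restart path above: certs, kubeconfigs, kubelet start, control-plane static pods, etcd.
func runPhases(kubeadm, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		if out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(runPhases("/var/lib/minikube/binaries/v1.16.0/kubeadm", "/var/tmp/minikube/kubeadm.yaml"))
}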
	I0919 17:54:49.554530   47798 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:54:49.554595   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:49.568050   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:50.092864   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:50.592484   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:51.092979   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:51.114757   47798 api_server.go:72] duration metric: took 1.560225697s to wait for apiserver process to appear ...
	I0919 17:54:51.114781   47798 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:54:51.114800   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:56.115914   47798 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 17:54:56.115962   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:57.769883   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:54:57.769915   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:54:58.270598   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:58.278169   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:54:58.278210   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0919 17:54:58.770880   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:58.778649   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:54:58.778679   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0919 17:54:59.270233   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:59.276275   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0919 17:54:59.283868   47798 api_server.go:141] control plane version: v1.16.0
	I0919 17:54:59.283896   47798 api_server.go:131] duration metric: took 8.169106612s to wait for apiserver health ...
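The healthz progression above (403 while the anonymous probe is still blocked, 500 while the rbac/bootstrap-roles and related post-start hooks finish, then 200) is polled until the endpoint reports healthy. A minimal standalone sketch of such a poll loop in Go; the function name and the use of InsecureSkipVerify are illustrative assumptions, not minikube's actual api_server.go:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes. Non-200 answers (403 before RBAC bootstrap, 500
    // while post-start hooks run) are treated as "not ready yet", matching the
    // progression seen in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a self-signed cert during bootstrap, so this
            // illustrative sketch skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.182:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }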
	I0919 17:54:59.283908   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:54:59.283916   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:54:59.285960   47798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:54:59.287537   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:54:59.298142   47798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
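The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referenced by the "Configuring bridge CNI" step. The log does not show its contents, so the conflist below is only a typical bridge example, and the Go helper writing it is an illustrative sketch rather than minikube's implementation:

    package main

    import "os"

    // A typical bridge CNI conflist; the exact contents of minikube's
    // 1-k8s.conflist are not shown in the log, so this is an assumed example.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        // In minikube the file is copied to the guest over SSH; writing it
        // locally is enough to illustrate the shape of the configuration.
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
            panic(err)
        }
    }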
	I0919 17:54:59.315861   47798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:54:59.324878   47798 system_pods.go:59] 8 kube-system pods found
	I0919 17:54:59.324917   47798 system_pods.go:61] "coredns-5644d7b6d9-4mh4f" [382ef590-a6ef-4402-8762-1649f060fbc4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:54:59.324940   47798 system_pods.go:61] "coredns-5644d7b6d9-wqwp7" [8756ca49-2953-422d-a534-6d1fa5655fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:54:59.324947   47798 system_pods.go:61] "etcd-old-k8s-version-100627" [1e7bdb28-9c7e-4cae-a87e-ec2fad64e820] Running
	I0919 17:54:59.324955   47798 system_pods.go:61] "kube-apiserver-old-k8s-version-100627" [59a703b6-7c16-48ba-8a78-c1ecd606f138] Running
	I0919 17:54:59.324966   47798 system_pods.go:61] "kube-controller-manager-old-k8s-version-100627" [ac10d741-9a7d-45a1-86f5-a912075b49b9] Running
	I0919 17:54:59.324971   47798 system_pods.go:61] "kube-proxy-j7kqn" [79381ec1-45a7-4424-8383-f97b530979d3] Running
	I0919 17:54:59.324986   47798 system_pods.go:61] "kube-scheduler-old-k8s-version-100627" [40df95ee-b184-48ff-b276-d01c7763c7fc] Running
	I0919 17:54:59.324993   47798 system_pods.go:61] "storage-provisioner" [00e5e0c9-0453-440b-aa5c-e6811f428297] Running
	I0919 17:54:59.325005   47798 system_pods.go:74] duration metric: took 9.119135ms to wait for pod list to return data ...
	I0919 17:54:59.325017   47798 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:54:59.328813   47798 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:54:59.328845   47798 node_conditions.go:123] node cpu capacity is 2
	I0919 17:54:59.328859   47798 node_conditions.go:105] duration metric: took 3.833575ms to run NodePressure ...
	I0919 17:54:59.328879   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:59.658953   47798 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:54:59.662655   47798 retry.go:31] will retry after 352.037588ms: kubelet not initialised
	I0919 17:55:00.020425   47798 retry.go:31] will retry after 411.927656ms: kubelet not initialised
	I0919 17:55:00.438027   47798 retry.go:31] will retry after 483.370654ms: kubelet not initialised
	I0919 17:55:00.928598   47798 retry.go:31] will retry after 987.946924ms: kubelet not initialised
	I0919 17:55:01.923328   47798 retry.go:31] will retry after 1.679023275s: kubelet not initialised
	I0919 17:55:03.607494   47798 retry.go:31] will retry after 1.92599571s: kubelet not initialised
	I0919 17:55:05.539070   47798 retry.go:31] will retry after 2.735570072s: kubelet not initialised
	I0919 17:55:08.280198   47798 retry.go:31] will retry after 4.516491636s: kubelet not initialised
	I0919 17:55:12.803629   47798 retry.go:31] will retry after 9.24421999s: kubelet not initialised
	I0919 17:55:22.053509   47798 retry.go:31] will retry after 10.860983763s: kubelet not initialised
	I0919 17:55:32.921288   47798 retry.go:31] will retry after 19.590918142s: kubelet not initialised
	I0919 17:55:52.517612   47798 kubeadm.go:787] kubelet initialised
	I0919 17:55:52.517637   47798 kubeadm.go:788] duration metric: took 52.858662322s waiting for restarted kubelet to initialise ...
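The retry intervals above grow with jitter (352ms, 411ms, ... 19.59s) until the restarted kubelet reports its pods. A small generic sketch of a retry-with-growing-backoff helper; the name retryWithBackoff and the plain doubling schedule are assumptions, not minikube's retry package:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or maxWait elapses, roughly
    // doubling the delay between attempts (the real intervals in the log above
    // also grow but are jittered).
    func retryWithBackoff(maxWait time.Duration, fn func() error) error {
        start := time.Now()
        delay := 350 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > maxWait {
                return fmt.Errorf("gave up after %s: %w", time.Since(start), err)
            }
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
    }

    func main() {
        attempts := 0
        _ = retryWithBackoff(time.Minute, func() error {
            attempts++
            if attempts < 5 {
                return errors.New("kubelet not initialised")
            }
            return nil
        })
        fmt.Println("kubelet initialised after", attempts, "attempts")
    }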
	I0919 17:55:52.517644   47798 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:55:52.523992   47798 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.530133   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.530151   47798 pod_ready.go:81] duration metric: took 6.127596ms waiting for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.530160   47798 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.535186   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.535202   47798 pod_ready.go:81] duration metric: took 5.035759ms waiting for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.535209   47798 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.540300   47798 pod_ready.go:92] pod "etcd-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.540317   47798 pod_ready.go:81] duration metric: took 5.101572ms waiting for pod "etcd-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.540324   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.546670   47798 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.546687   47798 pod_ready.go:81] duration metric: took 6.356984ms waiting for pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.546696   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.916320   47798 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.916342   47798 pod_ready.go:81] duration metric: took 369.639886ms waiting for pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.916353   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j7kqn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.316733   47798 pod_ready.go:92] pod "kube-proxy-j7kqn" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:53.316762   47798 pod_ready.go:81] duration metric: took 400.400609ms waiting for pod "kube-proxy-j7kqn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.316788   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.717319   47798 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:53.717344   47798 pod_ready.go:81] duration metric: took 400.544097ms waiting for pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.717358   47798 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:56.023621   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:55:58.025543   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:00.522985   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:02.523350   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:05.022971   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:07.023767   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:09.524598   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:12.024269   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:14.524109   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:16.525347   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:19.025990   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:21.522712   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:23.523098   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:25.525823   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:27.526575   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:30.023751   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:32.023914   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:34.523709   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:37.025284   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:39.523886   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:42.023525   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:44.023602   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:46.524942   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:49.023162   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:51.025968   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:53.523737   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:55.524950   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:58.023648   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:00.024635   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:02.024981   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:04.524374   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:07.024495   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:09.523646   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:12.023778   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:14.024012   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:16.024668   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:18.524581   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:20.525264   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:23.024223   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:25.024271   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:27.024863   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:29.524389   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:31.524867   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:34.026361   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:36.523516   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:38.523641   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:40.525417   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:43.023938   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:45.024235   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:47.025554   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:49.524344   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:52.023880   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:54.024324   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:56.024615   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:58.523806   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:00.524330   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:02.524813   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:05.023667   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:07.024328   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:09.521983   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:11.524126   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:14.033167   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:16.524193   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:19.023478   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:21.023719   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:23.024876   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:25.525000   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:28.022897   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:30.023651   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:32.523506   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:35.023201   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:37.024229   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:39.522709   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:41.524752   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:44.022121   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:46.025229   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:48.523728   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:50.524600   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:53.024769   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:55.523745   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:58.025806   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:00.524396   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:03.023037   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:05.023335   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:07.024052   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:09.024205   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:11.523020   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:13.524065   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:16.025098   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:18.523293   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:20.525391   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:23.025049   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:25.522619   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:27.525208   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:30.024344   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:32.024984   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:34.523267   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:36.524365   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:39.023558   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:41.523208   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:43.524139   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:46.023918   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:48.523431   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:50.523998   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:53.024150   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:53.718434   47798 pod_ready.go:81] duration metric: took 4m0.001059167s waiting for pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace to be "Ready" ...
	E0919 17:59:53.718466   47798 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:59:53.718484   47798 pod_ready.go:38] duration metric: took 4m1.200831266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:59:53.718520   47798 kubeadm.go:640] restartCluster took 5m15.553599416s
	W0919 17:59:53.718575   47798 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
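Each system-critical pod is checked for its Ready condition until the 4m0s budget runs out; once metrics-server fails to become Ready in time, the restart path is abandoned and the cluster is reset below. A standalone client-go sketch of waiting for a pod's Ready condition under a deadline (function name, polling interval, and kubeconfig path are illustrative; this is not minikube's pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or ctx expires,
    // mirroring the "extra waiting up to 4m0s" loop in the log above.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-74d5856cc6-rncgn"); err != nil {
            fmt.Println(err)
        }
    }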
	I0919 17:59:53.718604   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:59:58.500835   47798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.782205666s)
	I0919 17:59:58.500900   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:59:58.514207   47798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:59:58.524054   47798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:59:58.532896   47798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
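The config check above only verifies that the four kubeconfig files written by kubeadm exist; because the preceding `kubeadm reset` removed them, stale-config cleanup is skipped and a fresh `kubeadm init` follows. A minimal sketch of that presence check (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "os"
    )

    // kubeConfigFilesPresent reports whether all of the kubeconfig files that
    // kubeadm writes still exist; if any is missing (as after `kubeadm reset`),
    // stale-config cleanup can be skipped and a full init run instead.
    func kubeConfigFilesPresent() bool {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                return false
            }
        }
        return true
    }

    func main() {
        if !kubeConfigFilesPresent() {
            fmt.Println("config check failed, skipping stale config cleanup")
        }
    }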
	I0919 17:59:58.532945   47798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0919 17:59:58.588089   47798 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0919 17:59:58.588197   47798 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:59:58.739994   47798 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:59:58.740116   47798 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:59:58.740291   47798 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:59:58.968628   47798 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:59:58.968805   47798 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:59:58.977284   47798 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0919 17:59:59.111196   47798 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:59:59.113466   47798 out.go:204]   - Generating certificates and keys ...
	I0919 17:59:59.113599   47798 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:59:59.113711   47798 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:59:59.113854   47798 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:59:59.113938   47798 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:59:59.114070   47798 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:59:59.114144   47798 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:59:59.114911   47798 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:59:59.115382   47798 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:59:59.115986   47798 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:59:59.116548   47798 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:59:59.116630   47798 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:59:59.116713   47798 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:59:59.334495   47798 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:59:59.627886   47798 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:59:59.967368   47798 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:00:00.114260   47798 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:00:00.115507   47798 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:00:00.117811   47798 out.go:204]   - Booting up control plane ...
	I0919 18:00:00.117935   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:00:00.122651   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:00:00.125112   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:00:00.126687   47798 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:00:00.129807   47798 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 18:00:11.635043   47798 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.504905 seconds
	I0919 18:00:11.635206   47798 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:00:11.654058   47798 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:00:12.194702   47798 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:00:12.194899   47798 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-100627 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0919 18:00:12.704504   47798 kubeadm.go:322] [bootstrap-token] Using token: exrkug.z0q4aqb4emd0lkvm
	I0919 18:00:12.706136   47798 out.go:204]   - Configuring RBAC rules ...
	I0919 18:00:12.706241   47798 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:00:12.721292   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:00:12.729553   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:00:12.735434   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:00:12.739232   47798 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:00:12.816288   47798 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 18:00:13.140789   47798 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 18:00:13.142170   47798 kubeadm.go:322] 
	I0919 18:00:13.142257   47798 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 18:00:13.142268   47798 kubeadm.go:322] 
	I0919 18:00:13.142338   47798 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 18:00:13.142348   47798 kubeadm.go:322] 
	I0919 18:00:13.142382   47798 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 18:00:13.142468   47798 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:00:13.142554   47798 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:00:13.142571   47798 kubeadm.go:322] 
	I0919 18:00:13.142642   47798 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 18:00:13.142734   47798 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:00:13.142826   47798 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:00:13.142841   47798 kubeadm.go:322] 
	I0919 18:00:13.142952   47798 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0919 18:00:13.143062   47798 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 18:00:13.143073   47798 kubeadm.go:322] 
	I0919 18:00:13.143177   47798 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token exrkug.z0q4aqb4emd0lkvm \
	I0919 18:00:13.143336   47798 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 18:00:13.143374   47798 kubeadm.go:322]     --control-plane 	  
	I0919 18:00:13.143387   47798 kubeadm.go:322] 
	I0919 18:00:13.143501   47798 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:00:13.143511   47798 kubeadm.go:322] 
	I0919 18:00:13.143613   47798 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token exrkug.z0q4aqb4emd0lkvm \
	I0919 18:00:13.143744   47798 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 18:00:13.144341   47798 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:00:13.144373   47798 cni.go:84] Creating CNI manager for ""
	I0919 18:00:13.144392   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:00:13.146075   47798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:00:13.148011   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:00:13.159265   47798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 18:00:13.178271   47798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:00:13.178388   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.178420   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=old-k8s-version-100627 minikube.k8s.io/updated_at=2023_09_19T18_00_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.212392   47798 ops.go:34] apiserver oom_adj: -16
	I0919 18:00:13.509743   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.611752   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:14.210418   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:14.710689   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:15.210316   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:15.710515   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:16.210852   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:16.710451   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:17.210179   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:17.710559   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:18.210390   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:18.710683   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:19.210573   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:19.710581   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:20.210732   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:20.710461   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:21.210702   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:21.709813   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:22.209903   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:22.709847   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:23.210276   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:23.710692   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:24.210645   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:24.710835   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:25.209793   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:25.710473   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:26.209945   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:26.710136   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:27.210552   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:27.710679   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:28.209990   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:28.365531   47798 kubeadm.go:1081] duration metric: took 15.187210441s to wait for elevateKubeSystemPrivileges.
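The repeated `kubectl get sa default` calls above poll until the default service account exists in the freshly initialised cluster, which is what the elevateKubeSystemPrivileges duration metric measures. A sketch of such a wait loop using an exec'd kubectl (helper name, paths, and timeout are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultServiceAccount polls `kubectl get sa default` until the
    // service account exists, as in the loop logged above.
    func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account did not appear within %s", timeout)
    }

    func main() {
        err := waitForDefaultServiceAccount(
            "/var/lib/minikube/binaries/v1.16.0/kubectl",
            "/var/lib/minikube/kubeconfig",
            2*time.Minute,
        )
        fmt.Println(err)
    }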
	I0919 18:00:28.365564   47798 kubeadm.go:406] StartCluster complete in 5m50.250366407s
	I0919 18:00:28.365586   47798 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:00:28.365675   47798 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 18:00:28.368279   47798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:00:28.368566   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:00:28.368696   47798 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 18:00:28.368769   47798 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368797   47798 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-100627"
	I0919 18:00:28.368803   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 18:00:28.368850   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.368863   47798 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368878   47798 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-100627"
	W0919 18:00:28.368886   47798 addons.go:240] addon metrics-server should already be in state true
	I0919 18:00:28.368922   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.368851   47798 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368982   47798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-100627"
	I0919 18:00:28.369268   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369273   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369292   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.369294   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.369392   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369412   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.389023   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0919 18:00:28.389631   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.389718   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35909
	I0919 18:00:28.390023   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.390257   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36187
	I0919 18:00:28.390523   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.390547   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.390646   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.390676   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.390676   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.390895   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391311   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391391   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.391418   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.391709   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391712   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.391748   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.391757   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.391791   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.391838   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.410811   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0919 18:00:28.410846   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0919 18:00:28.411329   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.411366   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.411777   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.411796   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.411888   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.411905   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.412177   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.412219   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.412326   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.412402   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.414149   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.417333   47798 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 18:00:28.414621   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.419038   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:00:28.419051   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:00:28.419071   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.420833   47798 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:00:28.422332   47798 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:00:28.422358   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:00:28.422378   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.422103   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.422902   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.422992   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.423016   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.423112   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.423305   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.423474   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.425328   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.425845   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.425869   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.425895   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.426078   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.426219   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.426322   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.464699   47798 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-100627"
	I0919 18:00:28.464737   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.465028   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.465059   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.479442   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I0919 18:00:28.479839   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.480266   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.480294   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.480676   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.481211   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.481248   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.495810   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0919 18:00:28.496299   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.496709   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.496740   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.497099   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.497375   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.499150   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.499406   47798 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:00:28.499420   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:00:28.499434   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.502227   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.502622   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.502653   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.502792   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.502961   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.503112   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.503256   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.738306   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:00:28.738334   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 18:00:28.739481   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:00:28.753537   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:00:28.807289   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:00:28.807321   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:00:28.904080   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:00:28.904107   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:00:28.991114   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:00:29.327327   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:00:29.371292   47798 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-100627" context rescaled to 1 replicas
	I0919 18:00:29.371337   47798 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:00:29.373222   47798 out.go:177] * Verifying Kubernetes components...
	I0919 18:00:29.374912   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:00:30.105746   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366227457s)
	I0919 18:00:30.105776   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.352204878s)
	I0919 18:00:30.105793   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.105805   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.105814   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.105827   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106180   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Closing plugin on server side
	I0919 18:00:30.106222   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106236   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106246   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106259   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106357   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106373   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106396   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106408   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106486   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106500   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106513   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106522   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106592   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106602   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106826   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106842   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.185977   47798 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0919 18:00:30.185980   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.194821805s)
	I0919 18:00:30.186035   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.186031   47798 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-100627" to be "Ready" ...
	I0919 18:00:30.186049   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.186367   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.186383   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.186393   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.186402   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.186647   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.186671   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.186681   47798 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-100627"
	I0919 18:00:30.188971   47798 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0919 18:00:30.190949   47798 addons.go:502] enable addons completed in 1.822257993s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0919 18:00:30.236503   47798 node_ready.go:49] node "old-k8s-version-100627" has status "Ready":"True"
	I0919 18:00:30.236526   47798 node_ready.go:38] duration metric: took 50.473068ms waiting for node "old-k8s-version-100627" to be "Ready" ...
	I0919 18:00:30.236538   47798 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:00:30.243959   47798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:32.262563   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:34.263997   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:36.762957   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:37.763670   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.763694   47798 pod_ready.go:81] duration metric: took 7.519708991s waiting for pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.763704   47798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.769351   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.769371   47798 pod_ready.go:81] duration metric: took 5.660975ms waiting for pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.769382   47798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x7p9v" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.773846   47798 pod_ready.go:92] pod "kube-proxy-x7p9v" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.773866   47798 pod_ready.go:81] duration metric: took 4.476479ms waiting for pod "kube-proxy-x7p9v" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.773879   47798 pod_ready.go:38] duration metric: took 7.537327576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:00:37.773896   47798 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:00:37.773947   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:00:37.789245   47798 api_server.go:72] duration metric: took 8.417877969s to wait for apiserver process to appear ...
	I0919 18:00:37.789267   47798 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:00:37.789283   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 18:00:37.796929   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0919 18:00:37.798217   47798 api_server.go:141] control plane version: v1.16.0
	I0919 18:00:37.798233   47798 api_server.go:131] duration metric: took 8.960108ms to wait for apiserver health ...
	I0919 18:00:37.798240   47798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:00:37.802732   47798 system_pods.go:59] 5 kube-system pods found
	I0919 18:00:37.802751   47798 system_pods.go:61] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:37.802755   47798 system_pods.go:61] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:37.802759   47798 system_pods.go:61] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:37.802765   47798 system_pods.go:61] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:37.802771   47798 system_pods.go:61] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:37.802775   47798 system_pods.go:74] duration metric: took 4.531294ms to wait for pod list to return data ...
	I0919 18:00:37.802781   47798 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:00:37.805090   47798 default_sa.go:45] found service account: "default"
	I0919 18:00:37.805108   47798 default_sa.go:55] duration metric: took 2.323003ms for default service account to be created ...
	I0919 18:00:37.805115   47798 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:00:37.809387   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:37.809412   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:37.809421   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:37.809428   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:37.809437   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:37.809445   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:37.809492   47798 retry.go:31] will retry after 308.50392ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.123229   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.123251   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.123256   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.123262   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.123271   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.123277   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.123291   47798 retry.go:31] will retry after 322.697394ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.452201   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.452227   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.452232   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.452236   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.452242   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.452248   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.452263   47798 retry.go:31] will retry after 457.851598ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.916270   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.916309   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.916318   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.916325   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.916336   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.916345   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.916367   47798 retry.go:31] will retry after 438.479707ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:39.360169   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:39.360194   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:39.360199   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:39.360203   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:39.360210   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:39.360214   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:39.360228   47798 retry.go:31] will retry after 636.764599ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:40.002876   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:40.002902   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:40.002907   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:40.002911   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:40.002918   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:40.002922   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:40.002936   47798 retry.go:31] will retry after 763.456742ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:40.771715   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:40.771743   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:40.771751   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:40.771758   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:40.771768   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:40.771777   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:40.771794   47798 retry.go:31] will retry after 849.595493ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:41.628988   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:41.629014   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:41.629019   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:41.629024   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:41.629030   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:41.629035   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:41.629048   47798 retry.go:31] will retry after 1.130396523s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:42.765798   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:42.765825   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:42.765830   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:42.765834   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:42.765841   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:42.765846   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:42.765861   47798 retry.go:31] will retry after 1.444918771s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:44.216701   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:44.216726   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:44.216731   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:44.216735   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:44.216743   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:44.216751   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:44.216769   47798 retry.go:31] will retry after 2.010339666s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:46.233732   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:46.233764   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:46.233772   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:46.233779   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:46.233789   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:46.233798   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:46.233817   47798 retry.go:31] will retry after 2.386355588s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:48.625414   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:48.625451   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:48.625458   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:48.625463   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:48.625469   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:48.625478   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:48.625496   47798 retry.go:31] will retry after 3.40684833s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:52.037490   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:52.037516   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:52.037522   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:52.037526   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:52.037532   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:52.037538   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:52.037553   47798 retry.go:31] will retry after 4.080274795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:56.123283   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:56.123307   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:56.123312   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:56.123316   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:56.123322   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:56.123327   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:56.123341   47798 retry.go:31] will retry after 4.076928493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:00.205817   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:01:00.205842   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:00.205848   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:00.205851   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:00.205860   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:00.205865   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:00.205880   47798 retry.go:31] will retry after 6.340158574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:06.551794   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:01:06.551821   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:06.551829   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:06.551835   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:06.551844   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:06.551852   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:06.551870   47798 retry.go:31] will retry after 8.178931758s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:14.737898   47798 system_pods.go:86] 8 kube-system pods found
	I0919 18:01:14.737926   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:14.737934   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:14.737941   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:14.737947   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Pending
	I0919 18:01:14.737955   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:14.737961   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Pending
	I0919 18:01:14.737969   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:14.737977   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:14.737996   47798 retry.go:31] will retry after 7.690456991s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:22.435672   47798 system_pods.go:86] 8 kube-system pods found
	I0919 18:01:22.435706   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:22.435714   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:22.435721   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:22.435728   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Running
	I0919 18:01:22.435736   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:22.435744   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Running
	I0919 18:01:22.435755   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:22.435765   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:22.435782   47798 retry.go:31] will retry after 8.810480707s: missing components: kube-apiserver
	I0919 18:01:31.254171   47798 system_pods.go:86] 9 kube-system pods found
	I0919 18:01:31.254216   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:31.254223   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:31.254228   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:31.254233   47798 system_pods.go:89] "kube-apiserver-old-k8s-version-100627" [477571a2-c091-4d30-9c70-389556fade77] Running
	I0919 18:01:31.254240   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Running
	I0919 18:01:31.254246   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:31.254252   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Running
	I0919 18:01:31.254263   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:31.254278   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:31.254287   47798 system_pods.go:126] duration metric: took 53.449167375s to wait for k8s-apps to be running ...
	I0919 18:01:31.254295   47798 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:01:31.254346   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:01:31.270302   47798 system_svc.go:56] duration metric: took 16.000049ms WaitForService to wait for kubelet.
	I0919 18:01:31.270329   47798 kubeadm.go:581] duration metric: took 1m1.898967343s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 18:01:31.270356   47798 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:01:31.273300   47798 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 18:01:31.273324   47798 node_conditions.go:123] node cpu capacity is 2
	I0919 18:01:31.273334   47798 node_conditions.go:105] duration metric: took 2.973337ms to run NodePressure ...
	I0919 18:01:31.273344   47798 start.go:228] waiting for startup goroutines ...
	I0919 18:01:31.273349   47798 start.go:233] waiting for cluster config update ...
	I0919 18:01:31.273358   47798 start.go:242] writing updated cluster config ...
	I0919 18:01:31.273601   47798 ssh_runner.go:195] Run: rm -f paused
	I0919 18:01:31.321319   47798 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0919 18:01:31.323360   47798 out.go:177] 
	W0919 18:01:31.324777   47798 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0919 18:01:31.326209   47798 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0919 18:01:31.327585   47798 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-100627" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:46:59 UTC, ends at Tue 2023-09-19 18:01:43 UTC. --
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.608237397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146503608213033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=c7884388-2817-4d4c-98fd-2e6fd814e0ed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.609270551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=86045bee-ba71-4fe7-9ab0-09663afe0327 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.609333178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=86045bee-ba71-4fe7-9ab0-09663afe0327 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.609600670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0,PodSandboxId:2ceccaf32c2604f9e32dd74cd7cb50a2fdf6bc2d3e03f0b9f5aab24c2c97237b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1695145961269647877,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c8d577d-e182-4428-bb14-10b679241771,},Annotations:map[string]string{io.kubernetes.container.hash: b1a22052,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8,PodSandboxId:7ae3d7c4db9bf391060780ee0816c2f40e1f2ae0ec1f1514d9d364602560aaf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1695145960572508095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n478x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9824f626-4a8b-485e-aa70-d88b5ebfb085,},Annotations:map[string]string{io.kubernetes.container.hash: 495fc4ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf,PodSandboxId:9967028da52b46f2a80642eacbf8d46c0412eb643c20712b5987379877ac450c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1695145959042729592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hk6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1512b039-4c8e-45bc-bbca-82215ea569eb,},Annotations:map[string]string{io.kubernetes.container.hash: e3df76e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10,PodSandboxId:a442fa9c6b7c40b30e229b9e68c97a7c05caca0882298a15ef9d2f48e4a9661a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1695145937823443305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0deda193e5726e1a257e3361c4c96b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 63532bb8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5,PodSandboxId:47e2bdc83d92fe8b69d5b1c6c6d822924a64b4f5e646d84cd551fc32ecc8b93e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1695145937690907486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f07e1c562a8eb398f595a76e2f65e99,},Annotations:map
[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17,PodSandboxId:38cb6bd4bb8b77197263be98d8450acd9483e3375518d07373f489689a929363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1695145937460128631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d1bfbc570aa408ddbdd6003
f434ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53,PodSandboxId:29db978b9b0883902b3f7bb946100d679ea7660856a06869f005738ba59206f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1695145937293805526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19b2d9aefcad7a764a7355031330539,},A
nnotations:map[string]string{io.kubernetes.container.hash: e44faa14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=86045bee-ba71-4fe7-9ab0-09663afe0327 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.653430228Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=719e2a2d-ca49-43d0-8797-205553a897d4 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.653516458Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=719e2a2d-ca49-43d0-8797-205553a897d4 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.655022633Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=612de018-40f1-4274-ab58-dc68b1878415 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.655366115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146503655353948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=612de018-40f1-4274-ab58-dc68b1878415 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.656242961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e95ce270-f962-48b9-a157-3f419129acef name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.656321865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e95ce270-f962-48b9-a157-3f419129acef name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.656492725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0,PodSandboxId:2ceccaf32c2604f9e32dd74cd7cb50a2fdf6bc2d3e03f0b9f5aab24c2c97237b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1695145961269647877,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c8d577d-e182-4428-bb14-10b679241771,},Annotations:map[string]string{io.kubernetes.container.hash: b1a22052,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8,PodSandboxId:7ae3d7c4db9bf391060780ee0816c2f40e1f2ae0ec1f1514d9d364602560aaf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1695145960572508095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n478x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9824f626-4a8b-485e-aa70-d88b5ebfb085,},Annotations:map[string]string{io.kubernetes.container.hash: 495fc4ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf,PodSandboxId:9967028da52b46f2a80642eacbf8d46c0412eb643c20712b5987379877ac450c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1695145959042729592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hk6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1512b039-4c8e-45bc-bbca-82215ea569eb,},Annotations:map[string]string{io.kubernetes.container.hash: e3df76e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10,PodSandboxId:a442fa9c6b7c40b30e229b9e68c97a7c05caca0882298a15ef9d2f48e4a9661a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1695145937823443305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0deda193e5726e1a257e3361c4c96b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 63532bb8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5,PodSandboxId:47e2bdc83d92fe8b69d5b1c6c6d822924a64b4f5e646d84cd551fc32ecc8b93e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1695145937690907486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f07e1c562a8eb398f595a76e2f65e99,},Annotations:map
[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17,PodSandboxId:38cb6bd4bb8b77197263be98d8450acd9483e3375518d07373f489689a929363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1695145937460128631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d1bfbc570aa408ddbdd6003
f434ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53,PodSandboxId:29db978b9b0883902b3f7bb946100d679ea7660856a06869f005738ba59206f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1695145937293805526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19b2d9aefcad7a764a7355031330539,},A
nnotations:map[string]string{io.kubernetes.container.hash: e44faa14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e95ce270-f962-48b9-a157-3f419129acef name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.700950367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d658a900-70a4-4408-bcff-5a4b380a2cc7 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.701038679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d658a900-70a4-4408-bcff-5a4b380a2cc7 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.702885394Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5bbc6aea-f8c3-45a9-baf5-6494fe700d85 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.703362448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146503703342883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=5bbc6aea-f8c3-45a9-baf5-6494fe700d85 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.709568846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1528192b-1606-4406-b0a8-b9efb54f6ee6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.709633817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1528192b-1606-4406-b0a8-b9efb54f6ee6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.709888816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0,PodSandboxId:2ceccaf32c2604f9e32dd74cd7cb50a2fdf6bc2d3e03f0b9f5aab24c2c97237b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1695145961269647877,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c8d577d-e182-4428-bb14-10b679241771,},Annotations:map[string]string{io.kubernetes.container.hash: b1a22052,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8,PodSandboxId:7ae3d7c4db9bf391060780ee0816c2f40e1f2ae0ec1f1514d9d364602560aaf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1695145960572508095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n478x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9824f626-4a8b-485e-aa70-d88b5ebfb085,},Annotations:map[string]string{io.kubernetes.container.hash: 495fc4ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf,PodSandboxId:9967028da52b46f2a80642eacbf8d46c0412eb643c20712b5987379877ac450c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1695145959042729592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hk6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1512b039-4c8e-45bc-bbca-82215ea569eb,},Annotations:map[string]string{io.kubernetes.container.hash: e3df76e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10,PodSandboxId:a442fa9c6b7c40b30e229b9e68c97a7c05caca0882298a15ef9d2f48e4a9661a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1695145937823443305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0deda193e5726e1a257e3361c4c96b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 63532bb8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5,PodSandboxId:47e2bdc83d92fe8b69d5b1c6c6d822924a64b4f5e646d84cd551fc32ecc8b93e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1695145937690907486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f07e1c562a8eb398f595a76e2f65e99,},Annotations:map
[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17,PodSandboxId:38cb6bd4bb8b77197263be98d8450acd9483e3375518d07373f489689a929363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1695145937460128631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d1bfbc570aa408ddbdd6003
f434ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53,PodSandboxId:29db978b9b0883902b3f7bb946100d679ea7660856a06869f005738ba59206f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1695145937293805526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19b2d9aefcad7a764a7355031330539,},A
nnotations:map[string]string{io.kubernetes.container.hash: e44faa14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1528192b-1606-4406-b0a8-b9efb54f6ee6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.750324398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d7f7ab4f-2a73-4f1c-87c8-77a6dde45fbf name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.750403072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d7f7ab4f-2a73-4f1c-87c8-77a6dde45fbf name=/runtime.v1.RuntimeService/Version
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.751784317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6ab25df2-2212-4e3d-b287-41e82f429606 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.752137993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146503752124923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=6ab25df2-2212-4e3d-b287-41e82f429606 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.753015556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=057bd375-7fe5-47a0-bfe1-61ed4ffba9d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.753091492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=057bd375-7fe5-47a0-bfe1-61ed4ffba9d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:01:43 no-preload-215748 crio[725]: time="2023-09-19 18:01:43.753261445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0,PodSandboxId:2ceccaf32c2604f9e32dd74cd7cb50a2fdf6bc2d3e03f0b9f5aab24c2c97237b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1695145961269647877,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c8d577d-e182-4428-bb14-10b679241771,},Annotations:map[string]string{io.kubernetes.container.hash: b1a22052,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8,PodSandboxId:7ae3d7c4db9bf391060780ee0816c2f40e1f2ae0ec1f1514d9d364602560aaf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1695145960572508095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n478x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9824f626-4a8b-485e-aa70-d88b5ebfb085,},Annotations:map[string]string{io.kubernetes.container.hash: 495fc4ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf,PodSandboxId:9967028da52b46f2a80642eacbf8d46c0412eb643c20712b5987379877ac450c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1695145959042729592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hk6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1512b039-4c8e-45bc-bbca-82215ea569eb,},Annotations:map[string]string{io.kubernetes.container.hash: e3df76e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10,PodSandboxId:a442fa9c6b7c40b30e229b9e68c97a7c05caca0882298a15ef9d2f48e4a9661a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1695145937823443305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0deda193e5726e1a257e3361c4c96b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 63532bb8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5,PodSandboxId:47e2bdc83d92fe8b69d5b1c6c6d822924a64b4f5e646d84cd551fc32ecc8b93e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1695145937690907486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f07e1c562a8eb398f595a76e2f65e99,},Annotations:map
[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17,PodSandboxId:38cb6bd4bb8b77197263be98d8450acd9483e3375518d07373f489689a929363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1695145937460128631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d1bfbc570aa408ddbdd6003
f434ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53,PodSandboxId:29db978b9b0883902b3f7bb946100d679ea7660856a06869f005738ba59206f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1695145937293805526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19b2d9aefcad7a764a7355031330539,},A
nnotations:map[string]string{io.kubernetes.container.hash: e44faa14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=057bd375-7fe5-47a0-bfe1-61ed4ffba9d4 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b7f19f67260b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2ceccaf32c260       storage-provisioner
	031b71aecf891       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   7ae3d7c4db9bf       coredns-5dd5756b68-n478x
	ee3fd4f8b5459       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   9 minutes ago       Running             kube-proxy                0                   9967028da52b4       kube-proxy-hk6k2
	3dee5d1bd72fd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   a442fa9c6b7c4       etcd-no-preload-215748
	093aa73f970a7       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   9 minutes ago       Running             kube-scheduler            2                   47e2bdc83d92f       kube-scheduler-no-preload-215748
	4f81a863b5a96       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   9 minutes ago       Running             kube-controller-manager   2                   38cb6bd4bb8b7       kube-controller-manager-no-preload-215748
	4c5b31233fe26       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   9 minutes ago       Running             kube-apiserver            2                   29db978b9b088       kube-apiserver-no-preload-215748
	
	* 
	* ==> coredns [031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53752 - 38869 "HINFO IN 5738684559045176053.3221094870797899799. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015826552s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-215748
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-215748
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=no-preload-215748
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_52_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:52:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-215748
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 18:01:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:57:50 +0000   Tue, 19 Sep 2023 17:52:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:57:50 +0000   Tue, 19 Sep 2023 17:52:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:57:50 +0000   Tue, 19 Sep 2023 17:52:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:57:50 +0000   Tue, 19 Sep 2023 17:52:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    no-preload-215748
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e0fa384a46344cdacc88ca2dc5a26a7
	  System UUID:                2e0fa384-a463-44cd-acc8-8ca2dc5a26a7
	  Boot ID:                    36753ce6-6f89-4c93-a64d-f62619ce8891
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-n478x                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-215748                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-215748             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-215748    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-hk6k2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-215748             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-nwxvc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node no-preload-215748 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node no-preload-215748 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node no-preload-215748 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m19s  kubelet          Node no-preload-215748 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m9s   kubelet          Node no-preload-215748 status is now: NodeReady
	  Normal  RegisteredNode           9m7s   node-controller  Node no-preload-215748 event: Registered Node no-preload-215748 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071309] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.392990] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.348083] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150874] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Sep19 17:47] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.397031] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.105390] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.133593] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.108373] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.223767] systemd-fstab-generator[710]: Ignoring "noauto" for root device
	[ +30.502310] systemd-fstab-generator[1227]: Ignoring "noauto" for root device
	[ +19.349722] kauditd_printk_skb: 29 callbacks suppressed
	[Sep19 17:52] systemd-fstab-generator[3819]: Ignoring "noauto" for root device
	[  +9.313451] systemd-fstab-generator[4146]: Ignoring "noauto" for root device
	[ +14.170864] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10] <==
	* {"level":"info","ts":"2023-09-19T17:52:19.355927Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T17:52:19.355993Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T17:52:19.356021Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-19T17:52:19.356295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f switched to configuration voters=(12312128054573816431)"}
	{"level":"info","ts":"2023-09-19T17:52:19.356424Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"546e0a293cd37a14","local-member-id":"aadd773bb1fe5a6f","added-peer-id":"aadd773bb1fe5a6f","added-peer-peer-urls":["https://192.168.39.15:2380"]}
	{"level":"info","ts":"2023-09-19T17:52:20.092781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-19T17:52:20.092869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-19T17:52:20.092886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f received MsgPreVoteResp from aadd773bb1fe5a6f at term 1"}
	{"level":"info","ts":"2023-09-19T17:52:20.092897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f became candidate at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:20.092902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f received MsgVoteResp from aadd773bb1fe5a6f at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:20.092915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f became leader at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:20.092922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aadd773bb1fe5a6f elected leader aadd773bb1fe5a6f at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:20.096875Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aadd773bb1fe5a6f","local-member-attributes":"{Name:no-preload-215748 ClientURLs:[https://192.168.39.15:2379]}","request-path":"/0/members/aadd773bb1fe5a6f/attributes","cluster-id":"546e0a293cd37a14","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:52:20.097135Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:52:20.098897Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:52:20.109624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:52:20.109717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:52:20.110194Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:20.110947Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.15:2379"}
	{"level":"info","ts":"2023-09-19T17:52:20.111264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T17:52:20.119067Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"546e0a293cd37a14","local-member-id":"aadd773bb1fe5a6f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:20.119173Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:20.119219Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-09-19T17:54:36.74396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.85436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-19T17:54:36.744565Z","caller":"traceutil/trace.go:171","msg":"trace[1339816459] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:530; }","duration":"205.517583ms","start":"2023-09-19T17:54:36.539006Z","end":"2023-09-19T17:54:36.744524Z","steps":["trace[1339816459] 'range keys from in-memory index tree'  (duration: 204.783072ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:01:44 up 14 min,  0 users,  load average: 0.09, 0.24, 0.21
	Linux no-preload-215748 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53] <==
	* W0919 17:57:22.929639       1 handler_proxy.go:93] no RequestInfo found in the context
	W0919 17:57:22.929841       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:57:22.930009       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 17:57:22.930049       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 17:57:22.930011       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 17:57:22.932101       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 17:58:21.853351       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 17:58:22.930265       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:58:22.930344       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 17:58:22.930360       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 17:58:22.932779       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:58:22.932907       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 17:58:22.932942       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 17:59:21.852817       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:00:21.853187       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:00:22.931383       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:00:22.931467       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:00:22.931484       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 18:00:22.933855       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:00:22.934118       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:00:22.934207       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:01:21.853580       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17] <==
	* I0919 17:56:07.824777       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:56:37.491106       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:56:37.834351       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:57:07.496971       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:57:07.842867       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:57:37.505762       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:57:37.851798       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:58:07.511409       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:58:07.864224       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 17:58:36.601539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.074036ms"
	E0919 17:58:37.518864       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:58:37.873435       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 17:58:47.599506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="108.659µs"
	E0919 17:59:07.525900       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:59:07.889399       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:59:37.532964       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:59:37.898784       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:00:07.540053       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:00:07.909385       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:00:37.547160       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:00:37.918580       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:01:07.553138       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:01:07.929268       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:01:37.559303       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:01:37.941517       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf] <==
	* I0919 17:52:40.152208       1 server_others.go:69] "Using iptables proxy"
	I0919 17:52:40.172024       1 node.go:141] Successfully retrieved node IP: 192.168.39.15
	I0919 17:52:40.565091       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 17:52:40.565150       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 17:52:40.573498       1 server_others.go:152] "Using iptables Proxier"
	I0919 17:52:40.573574       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 17:52:40.573958       1 server.go:846] "Version info" version="v1.28.2"
	I0919 17:52:40.573969       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:52:40.576012       1 config.go:188] "Starting service config controller"
	I0919 17:52:40.576050       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 17:52:40.576079       1 config.go:97] "Starting endpoint slice config controller"
	I0919 17:52:40.576082       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 17:52:40.576595       1 config.go:315] "Starting node config controller"
	I0919 17:52:40.576601       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 17:52:40.677532       1 shared_informer.go:318] Caches are synced for node config
	I0919 17:52:40.677585       1 shared_informer.go:318] Caches are synced for service config
	I0919 17:52:40.677610       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5] <==
	* W0919 17:52:21.952185       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:52:21.952192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 17:52:22.781825       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 17:52:22.781899       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:52:22.783356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 17:52:22.783403       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0919 17:52:22.828125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 17:52:22.828192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 17:52:22.844979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 17:52:22.845060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 17:52:22.916035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:22.916161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:22.923079       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:52:22.923208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 17:52:23.053092       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:23.053162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:23.101789       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 17:52:23.101872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0919 17:52:23.103736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:23.103783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:23.172542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:23.172599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:23.188450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 17:52:23.188503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0919 17:52:24.738304       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:46:59 UTC, ends at Tue 2023-09-19 18:01:44 UTC. --
	Sep 19 17:58:58 no-preload-215748 kubelet[4153]: E0919 17:58:58.581423    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 17:59:13 no-preload-215748 kubelet[4153]: E0919 17:59:13.580727    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 17:59:24 no-preload-215748 kubelet[4153]: E0919 17:59:24.582034    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 17:59:25 no-preload-215748 kubelet[4153]: E0919 17:59:25.702943    4153 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 17:59:25 no-preload-215748 kubelet[4153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 17:59:25 no-preload-215748 kubelet[4153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 17:59:25 no-preload-215748 kubelet[4153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 17:59:37 no-preload-215748 kubelet[4153]: E0919 17:59:37.582629    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 17:59:48 no-preload-215748 kubelet[4153]: E0919 17:59:48.580901    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:00:03 no-preload-215748 kubelet[4153]: E0919 18:00:03.581863    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:00:16 no-preload-215748 kubelet[4153]: E0919 18:00:16.580618    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:00:25 no-preload-215748 kubelet[4153]: E0919 18:00:25.703451    4153 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:00:25 no-preload-215748 kubelet[4153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:00:25 no-preload-215748 kubelet[4153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:00:25 no-preload-215748 kubelet[4153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:00:29 no-preload-215748 kubelet[4153]: E0919 18:00:29.581436    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:00:44 no-preload-215748 kubelet[4153]: E0919 18:00:44.580453    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:00:55 no-preload-215748 kubelet[4153]: E0919 18:00:55.583594    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:01:08 no-preload-215748 kubelet[4153]: E0919 18:01:08.581022    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:01:21 no-preload-215748 kubelet[4153]: E0919 18:01:21.583493    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:01:25 no-preload-215748 kubelet[4153]: E0919 18:01:25.701149    4153 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:01:25 no-preload-215748 kubelet[4153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:01:25 no-preload-215748 kubelet[4153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:01:25 no-preload-215748 kubelet[4153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:01:35 no-preload-215748 kubelet[4153]: E0919 18:01:35.581781    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	
	* 
	* ==> storage-provisioner [9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0] <==
	* I0919 17:52:41.467806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 17:52:41.478616       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 17:52:41.478791       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 17:52:41.488900       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 17:52:41.489135       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-215748_1bce18fa-9d8d-4d2b-b5db-0cb56d567d82!
	I0919 17:52:41.491736       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6e5d2114-7188-4d71-ade0-8ca69d575004", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-215748_1bce18fa-9d8d-4d2b-b5db-0cb56d567d82 became leader
	I0919 17:52:41.589744       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-215748_1bce18fa-9d8d-4d2b-b5db-0cb56d567d82!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-215748 -n no-preload-215748
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-215748 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nwxvc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-215748 describe pod metrics-server-57f55c9bc5-nwxvc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-215748 describe pod metrics-server-57f55c9bc5-nwxvc: exit status 1 (66.693776ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nwxvc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-215748 describe pod metrics-server-57f55c9bc5-nwxvc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.51s)
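The metrics-server ImagePullBackOff in the kubelet log above is expected for this suite: the Audit table later in this report shows the addon being enabled for no-preload-215748 with --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, so that image can never be pulled. The failure recorded here is the separate dashboard readiness wait timing out. A hypothetical spot check (the Deployment name metrics-server and the dashboard label selector are assumptions based on the pod names and wait messages quoted in this report, not taken from the harness code):

	# illustrative only; deployment name and label selector are assumptions, not harness code
	kubectl --context no-preload-215748 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context no-preload-215748 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard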

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0919 17:53:21.263442   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 17:54:44.310766   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 17:56:14.061291   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 17:57:56.282185   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:58:21.263150   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 17:59:19.335087   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 18:01:14.061194   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-415155 -n embed-certs-415155
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-19 18:02:13.144655573 +0000 UTC m=+5266.076639621
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
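The check that just timed out is, in effect, a readiness wait on pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace of embed-certs-415155. A rough hand-rolled equivalent (illustrative only, not the harness's actual code) would be:

	# illustrative equivalent of the 9m0s wait above; not part of the test harness
	kubectl --context embed-certs-415155 -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

Note that the Audit table in the logs below records the addons enable dashboard -p embed-certs-415155 invocation with no end time, so there may have been no dashboard pods for such a wait to find in the first place.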
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-415155 -n embed-certs-415155
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-415155 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-415155 logs -n 25: (1.317025841s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-512928 -- sudo                         | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-512928                                 | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-367630                            | force-systemd-env-367630     | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-415155            | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-140688 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | disable-driver-mounts-140688                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:41 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-215748             | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-415555  | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC | 19 Sep 23 17:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC |                     |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-415155                 | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-215748                  | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-415555       | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC | 19 Sep 23 17:52 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-100627        | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC | 19 Sep 23 17:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-100627             | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC | 19 Sep 23 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:49:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:49:25.690379   47798 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:49:25.690666   47798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:49:25.690680   47798 out.go:309] Setting ErrFile to fd 2...
	I0919 17:49:25.690688   47798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:49:25.690866   47798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:49:25.691435   47798 out.go:303] Setting JSON to false
	I0919 17:49:25.692368   47798 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5516,"bootTime":1695140250,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:49:25.692468   47798 start.go:138] virtualization: kvm guest
	I0919 17:49:25.694628   47798 out.go:177] * [old-k8s-version-100627] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:49:25.696349   47798 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:49:25.696345   47798 notify.go:220] Checking for updates...
	I0919 17:49:25.697700   47798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:49:25.699081   47798 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:49:25.700392   47798 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:49:25.701684   47798 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:49:25.704016   47798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:49:25.705911   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:49:25.706464   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.706525   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.722505   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40829
	I0919 17:49:25.722936   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.723454   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.723479   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.723851   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.724042   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.726028   47798 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0919 17:49:25.727479   47798 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:49:25.727787   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.727829   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.743272   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I0919 17:49:25.743700   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.744180   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.744206   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.744589   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.744775   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.781696   47798 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:49:25.783056   47798 start.go:298] selected driver: kvm2
	I0919 17:49:25.783069   47798 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:49:25.783172   47798 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:49:25.783797   47798 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:49:25.783868   47798 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:49:25.797796   47798 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:49:25.798190   47798 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 17:49:25.798229   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:49:25.798239   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:49:25.798254   47798 start_flags.go:321] config:
	{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:49:25.798391   47798 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:49:25.800110   47798 out.go:177] * Starting control plane node old-k8s-version-100627 in cluster old-k8s-version-100627
	I0919 17:49:25.801393   47798 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:49:25.801433   47798 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0919 17:49:25.801447   47798 cache.go:57] Caching tarball of preloaded images
	I0919 17:49:25.801545   47798 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:49:25.801559   47798 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0919 17:49:25.801689   47798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:49:25.801924   47798 start.go:365] acquiring machines lock for old-k8s-version-100627: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:49:25.801971   47798 start.go:369] acquired machines lock for "old-k8s-version-100627" in 26.483µs
	I0919 17:49:25.801985   47798 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:49:25.801989   47798 fix.go:54] fixHost starting: 
	I0919 17:49:25.802270   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.802300   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.816968   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
	I0919 17:49:25.817484   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.818034   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.818069   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.818376   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.818564   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.818799   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:49:25.820610   47798 fix.go:102] recreateIfNeeded on old-k8s-version-100627: state=Running err=<nil>
	W0919 17:49:25.820646   47798 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:49:25.822656   47798 out.go:177] * Updating the running kvm2 "old-k8s-version-100627" VM ...
	I0919 17:49:25.475965   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:27.476794   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:24.179260   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:26.686283   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:26.993419   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:28.995394   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:25.824024   47798 machine.go:88] provisioning docker machine ...
	I0919 17:49:25.824053   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.824279   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:49:25.824480   47798 buildroot.go:166] provisioning hostname "old-k8s-version-100627"
	I0919 17:49:25.824508   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:49:25.824671   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:49:25.827416   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:49:25.827890   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:49:25.827920   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:49:25.828092   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:49:25.828287   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:49:25.828490   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:49:25.828642   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:49:25.828819   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:49:25.829172   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:49:25.829188   47798 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100627 && echo "old-k8s-version-100627" | sudo tee /etc/hostname
	I0919 17:49:28.724736   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:29.976563   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.976829   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:29.180775   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.677584   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:33.678666   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.493348   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:33.495016   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.796651   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:33.977341   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:36.477521   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:36.178183   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:38.679802   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:35.495920   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:37.993770   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:39.994165   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:37.876662   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:38.477642   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:40.977376   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:41.177699   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:43.178895   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:42.494311   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:44.494974   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:40.948690   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:43.476725   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:45.477936   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:47.977074   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:45.678443   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:48.178687   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:46.994529   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:49.494895   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:47.028682   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:50.100607   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:50.476569   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:52.478246   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:50.179250   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:52.180827   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:51.994091   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.494911   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.480792   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.978326   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.678236   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.678493   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:58.678539   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.496729   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:58.993989   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:59.224657   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:59.476603   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:01.477023   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:00.678913   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:03.178281   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:01.494409   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:03.993808   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:02.292662   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:03.477796   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.976205   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.180836   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:07.678312   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.994188   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:07.999270   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:08.372675   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:08.476522   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:10.976260   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:09.679568   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:12.179377   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:10.494291   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:12.995682   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:11.444679   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:13.476906   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:15.478193   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.976583   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:14.679325   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:16.690040   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:15.496998   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.993599   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:19.993922   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.524614   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:20.596688   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:20.476110   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:22.477330   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:19.184902   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:21.678830   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:23.679261   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:22.494626   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:24.993912   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:24.976379   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.976627   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.177309   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:28.179300   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:27.494133   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:29.494473   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.676677   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:29.748706   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:28.976722   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:30.980716   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:30.678715   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.177789   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:31.993563   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.995728   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.476205   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.975739   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:37.978115   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.178188   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:37.178328   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:36.493541   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:38.494380   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.832612   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:38.900652   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:40.476580   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:42.476989   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:39.180279   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:41.678338   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:43.678611   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:40.993785   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:42.994446   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:44.980626   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:44.976641   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:46.977032   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:46.178379   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:48.179405   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:45.494929   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:47.993704   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:49.995192   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:48.052702   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:48.977244   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:51.477325   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:50.678663   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:53.178707   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:52.493646   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:54.494478   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:54.132706   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:53.477737   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:55.977429   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:57.978145   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:55.678855   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:58.177724   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:56.993145   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:58.994370   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:57.208643   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:00.476193   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:02.476286   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:00.178398   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:02.677951   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:01.501993   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:03.993491   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:03.288721   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:04.476795   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:06.976387   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:05.177376   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:07.178224   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:05.995006   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:08.494405   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:06.360657   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:08.977404   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:11.475407   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:09.178322   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:11.179143   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:13.180235   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:10.494521   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:12.993993   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:12.436681   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:15.508678   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
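The repeated libmachine errors above come from process 47798 dialing the guest's SSH port and getting "no route to host", i.e. the VM at 192.168.72.182 is unreachable at the network level. A minimal sketch for checking the same path by hand, assuming the standard minikube machine key layout (the key path and the profile placeholder are assumptions, not taken from this log):

    # Probe the SSH port the same way libmachine does (plain TCP dial to :22).
    nc -vz -w 5 192.168.72.182 22
    # If the TCP dial succeeds, try an actual login with the per-machine key.
    ssh -i ~/.minikube/machines/<profile>/id_rsa -o ConnectTimeout=5 docker@192.168.72.182 true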
	I0919 17:51:13.975736   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.977800   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.679181   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:18.177065   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.494642   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:17.494846   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:19.993481   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:18.475821   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:20.476773   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:22.976145   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:20.178065   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:22.178249   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:21.993613   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:23.994655   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:21.588622   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:24.660703   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:24.976569   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:27.476021   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:24.678762   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:26.682314   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:26.493981   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:28.494262   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:29.477183   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:31.976125   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:29.178390   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:31.178551   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:33.678277   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:30.495041   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:32.993120   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:30.740717   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:33.816640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:33.977079   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:36.475678   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:36.179024   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:38.678508   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:35.495368   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:37.994521   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:39.892631   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:38.476601   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:40.978279   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:41.178365   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:43.678896   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:40.493826   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:42.992893   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:44.993574   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:42.968646   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:43.478156   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:45.976257   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:47.977272   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:46.178127   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:48.178192   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:47.494860   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:49.993714   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:49.044674   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:50.476391   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:52.976686   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:50.678434   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:53.177908   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:51.995140   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:54.494996   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:52.116699   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:54.977835   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:57.475875   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:55.178219   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:57.179598   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:56.992881   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:58.994100   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:58.200619   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:59.476340   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:01.975559   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:59.678336   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:00.158668   45961 pod_ready.go:81] duration metric: took 4m0.000408372s waiting for pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:00.158710   45961 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:00.158733   45961 pod_ready.go:38] duration metric: took 4m12.69690087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:00.158768   45961 kubeadm.go:640] restartCluster took 4m32.67884897s
	W0919 17:52:00.158862   45961 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0919 17:52:00.158899   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:52:00.995208   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:03.493604   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:01.272609   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:03.976776   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:06.478653   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:05.495181   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:07.995025   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:07.348614   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:10.424641   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:08.170853   46282 pod_ready.go:81] duration metric: took 4m0.00010513s waiting for pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:08.170890   46282 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:08.170903   46282 pod_ready.go:38] duration metric: took 4m5.202195097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
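The 4m0s timeout above is minikube's extra readiness wait giving up on the metrics-server pod. A rough manual equivalent for watching and inspecting the same pod, assuming the addon's usual k8s-app=metrics-server label (the label selector is an assumption):

    kubectl --context default-k8s-diff-port-415555 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-415555 -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=240s
    kubectl --context default-k8s-diff-port-415555 -n kube-system describe pod -l k8s-app=metrics-server

The describe output is usually where the actual cause surfaces (image pull failures, failed readiness probes), which the Ready=False polling above cannot distinguish on its own.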
	I0919 17:52:08.170929   46282 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:52:08.170960   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:08.171010   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:08.229465   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:08.229484   46282 cri.go:89] found id: ""
	I0919 17:52:08.229491   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:08.229537   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.234379   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:08.234434   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:08.280999   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:08.281033   46282 cri.go:89] found id: ""
	I0919 17:52:08.281044   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:08.281097   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.285499   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:08.285561   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:08.327387   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:08.327413   46282 cri.go:89] found id: ""
	I0919 17:52:08.327423   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:08.327481   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.333158   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:08.333235   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:08.375921   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:08.375946   46282 cri.go:89] found id: ""
	I0919 17:52:08.375955   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:08.376008   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.380156   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:08.380220   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:08.425586   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:08.425613   46282 cri.go:89] found id: ""
	I0919 17:52:08.425620   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:08.425676   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.430229   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:08.430302   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:08.482920   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:08.482946   46282 cri.go:89] found id: ""
	I0919 17:52:08.482956   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:08.483017   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.488497   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:08.488559   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:08.543405   46282 cri.go:89] found id: ""
	I0919 17:52:08.543432   46282 logs.go:284] 0 containers: []
	W0919 17:52:08.543441   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:08.543449   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:08.543510   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:08.588287   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:08.588309   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:08.588314   46282 cri.go:89] found id: ""
	I0919 17:52:08.588326   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:08.588390   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.592986   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.597223   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:08.597245   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:08.648372   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:08.648400   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:08.705158   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:08.705203   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:08.754475   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:08.754511   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:08.797571   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:08.797603   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:08.950578   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:08.950617   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:08.998529   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:08.998555   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:09.039415   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:09.039445   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:09.081622   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:09.081657   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:09.095239   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:09.095269   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:09.141402   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:09.141429   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:09.186918   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:09.186953   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:09.244473   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:09.244508   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
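The "Gathering logs" steps above all reduce to reading container logs out of CRI-O and unit logs out of journald on the node. A hand-run sketch of the same collection from inside a `minikube ssh` session (container IDs come from the first command):

    sudo crictl ps -a                           # list all containers and their IDs
    sudo crictl logs --tail 400 <container-id>  # last 400 lines of one container, as logs.go does
    sudo journalctl -u kubelet -n 400           # kubelet service logs
    sudo journalctl -u crio -n 400              # CRI-O runtime logs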
	I0919 17:52:12.216337   46282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:52:12.232741   46282 api_server.go:72] duration metric: took 4m15.890515742s to wait for apiserver process to appear ...
	I0919 17:52:12.232764   46282 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:52:12.232793   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:12.232844   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:12.279741   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:12.279769   46282 cri.go:89] found id: ""
	I0919 17:52:12.279780   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:12.279836   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.284490   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:12.284560   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:12.322547   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:12.322575   46282 cri.go:89] found id: ""
	I0919 17:52:12.322585   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:12.322648   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.326924   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:12.326981   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:12.376181   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:12.376201   46282 cri.go:89] found id: ""
	I0919 17:52:12.376208   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:12.376259   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.380831   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:12.380892   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:12.422001   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:12.422035   46282 cri.go:89] found id: ""
	I0919 17:52:12.422045   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:12.422112   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.426372   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:12.426456   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:12.474718   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:12.474739   46282 cri.go:89] found id: ""
	I0919 17:52:12.474749   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:12.474804   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.479781   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:12.479837   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:12.525008   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:12.525038   46282 cri.go:89] found id: ""
	I0919 17:52:12.525047   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:12.525106   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.529414   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:12.529480   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:12.573369   46282 cri.go:89] found id: ""
	I0919 17:52:12.573395   46282 logs.go:284] 0 containers: []
	W0919 17:52:12.573403   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:12.573410   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:12.573461   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:12.618041   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:12.618063   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:12.618067   46282 cri.go:89] found id: ""
	I0919 17:52:12.618074   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:12.618118   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.622248   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.626519   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:12.626537   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:12.667023   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:12.667052   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:13.123963   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:13.123996   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:10.495145   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:12.994448   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:13.243498   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:13.243533   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:13.289172   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:13.289208   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:13.325853   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:13.325883   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:13.363915   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:13.363943   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:13.412359   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:13.412394   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:13.458675   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:13.458706   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:13.473516   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:13.473549   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:13.538694   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:13.538723   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:13.606826   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:13.606871   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:13.652363   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:13.652394   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.204482   46282 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8444/healthz ...
	I0919 17:52:16.210733   46282 api_server.go:279] https://192.168.61.228:8444/healthz returned 200:
	ok
	I0919 17:52:16.212054   46282 api_server.go:141] control plane version: v1.28.2
	I0919 17:52:16.212076   46282 api_server.go:131] duration metric: took 3.979306376s to wait for apiserver health ...
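The healthz probe above targets the apiserver on port 8444, the non-default port this profile is named for. A curl equivalent, assuming the client certificate layout minikube normally writes under the profile directory (all three paths are assumptions):

    curl --cacert ~/.minikube/ca.crt \
         --cert ~/.minikube/profiles/default-k8s-diff-port-415555/client.crt \
         --key  ~/.minikube/profiles/default-k8s-diff-port-415555/client.key \
         https://192.168.61.228:8444/healthz
    # A healthy control plane answers with the bare string: ok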
	I0919 17:52:16.212085   46282 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:52:16.212106   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:16.212148   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:16.263882   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:16.263908   46282 cri.go:89] found id: ""
	I0919 17:52:16.263918   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:16.263978   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.268238   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:16.268291   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:16.309480   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:16.309504   46282 cri.go:89] found id: ""
	I0919 17:52:16.309511   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:16.309560   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.313860   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:16.313910   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:16.353715   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:16.353741   46282 cri.go:89] found id: ""
	I0919 17:52:16.353751   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:16.353812   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.358128   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:16.358194   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:16.398792   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:16.398811   46282 cri.go:89] found id: ""
	I0919 17:52:16.398818   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:16.398865   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.403410   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:16.403463   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:16.449884   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:16.449910   46282 cri.go:89] found id: ""
	I0919 17:52:16.449924   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:16.449966   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.454404   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:16.454462   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:16.500246   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:16.500265   46282 cri.go:89] found id: ""
	I0919 17:52:16.500274   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:16.500328   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.504468   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:16.504531   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:16.545865   46282 cri.go:89] found id: ""
	I0919 17:52:16.545888   46282 logs.go:284] 0 containers: []
	W0919 17:52:16.545895   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:16.545900   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:16.545953   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:16.584533   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:16.584560   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.584565   46282 cri.go:89] found id: ""
	I0919 17:52:16.584571   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:16.584619   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.588723   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.592429   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:16.592459   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:16.643853   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:16.643884   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:16.693660   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:16.693697   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:16.710833   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:16.710860   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:16.769518   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:16.769548   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:16.819614   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:16.819645   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.860112   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:16.860154   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:16.918657   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:16.918687   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:16.962381   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:16.962412   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:17.304580   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:17.304618   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:17.449337   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:17.449368   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:17.522234   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:17.522268   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:17.581061   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:17.581093   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:13.986517   45961 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.82758933s)
	I0919 17:52:13.986593   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:14.002396   45961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:52:14.012005   45961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:52:14.020952   45961 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:52:14.021075   45961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 17:52:14.249350   45961 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:52:20.161795   46282 system_pods.go:59] 8 kube-system pods found
	I0919 17:52:20.161825   46282 system_pods.go:61] "coredns-5dd5756b68-6fxz5" [75ff79be-8293-4d55-b285-4c6d1d64adf0] Running
	I0919 17:52:20.161833   46282 system_pods.go:61] "etcd-default-k8s-diff-port-415555" [673a7ad9-f811-426c-aef6-7a36f2ddcacf] Running
	I0919 17:52:20.161840   46282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-415555" [a28bd2e3-0c0b-415e-986f-e92052c9eb0d] Running
	I0919 17:52:20.161845   46282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-415555" [b3b0d3ac-4f9f-4127-902e-4ea4c49100ba] Running
	I0919 17:52:20.161850   46282 system_pods.go:61] "kube-proxy-5cghw" [2f9f0da5-e5e6-40e4-bb49-16749597ac07] Running
	I0919 17:52:20.161856   46282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-415555" [60410add-cbf3-403a-9a84-f8566f804757] Running
	I0919 17:52:20.161866   46282 system_pods.go:61] "metrics-server-57f55c9bc5-vq4p7" [be4949af-dd94-45ea-bb7d-2fe124ecd2a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:20.161876   46282 system_pods.go:61] "storage-provisioner" [d11f41ac-f846-46de-a517-dab454f05033] Running
	I0919 17:52:20.161885   46282 system_pods.go:74] duration metric: took 3.949793054s to wait for pod list to return data ...
	I0919 17:52:20.161895   46282 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:52:20.165017   46282 default_sa.go:45] found service account: "default"
	I0919 17:52:20.165041   46282 default_sa.go:55] duration metric: took 3.138746ms for default service account to be created ...
	I0919 17:52:20.165051   46282 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:52:20.171771   46282 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:20.171798   46282 system_pods.go:89] "coredns-5dd5756b68-6fxz5" [75ff79be-8293-4d55-b285-4c6d1d64adf0] Running
	I0919 17:52:20.171807   46282 system_pods.go:89] "etcd-default-k8s-diff-port-415555" [673a7ad9-f811-426c-aef6-7a36f2ddcacf] Running
	I0919 17:52:20.171815   46282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-415555" [a28bd2e3-0c0b-415e-986f-e92052c9eb0d] Running
	I0919 17:52:20.171823   46282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-415555" [b3b0d3ac-4f9f-4127-902e-4ea4c49100ba] Running
	I0919 17:52:20.171841   46282 system_pods.go:89] "kube-proxy-5cghw" [2f9f0da5-e5e6-40e4-bb49-16749597ac07] Running
	I0919 17:52:20.171847   46282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-415555" [60410add-cbf3-403a-9a84-f8566f804757] Running
	I0919 17:52:20.171858   46282 system_pods.go:89] "metrics-server-57f55c9bc5-vq4p7" [be4949af-dd94-45ea-bb7d-2fe124ecd2a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:20.171867   46282 system_pods.go:89] "storage-provisioner" [d11f41ac-f846-46de-a517-dab454f05033] Running
	I0919 17:52:20.171879   46282 system_pods.go:126] duration metric: took 6.820805ms to wait for k8s-apps to be running ...
	I0919 17:52:20.171891   46282 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:52:20.171944   46282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:20.191948   46282 system_svc.go:56] duration metric: took 20.046863ms WaitForService to wait for kubelet.
	I0919 17:52:20.191977   46282 kubeadm.go:581] duration metric: took 4m23.849755591s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:52:20.192003   46282 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:52:20.198066   46282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:52:20.198090   46282 node_conditions.go:123] node cpu capacity is 2
	I0919 17:52:20.198101   46282 node_conditions.go:105] duration metric: took 6.093464ms to run NodePressure ...
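The NodePressure step above reads the capacity figures (17784752Ki ephemeral storage, 2 CPUs) from the node object. A manual equivalent, assuming the node carries the profile name, as the static pod names above suggest:

    kubectl --context default-k8s-diff-port-415555 get node default-k8s-diff-port-415555 \
      -o jsonpath='{.status.capacity}{"\n"}'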
	I0919 17:52:20.198113   46282 start.go:228] waiting for startup goroutines ...
	I0919 17:52:20.198122   46282 start.go:233] waiting for cluster config update ...
	I0919 17:52:20.198131   46282 start.go:242] writing updated cluster config ...
	I0919 17:52:20.198390   46282 ssh_runner.go:195] Run: rm -f paused
	I0919 17:52:20.260334   46282 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:52:20.262660   46282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-415555" cluster and "default" namespace by default
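With the profile reported ready, a quick sanity check that the kubeconfig really points at this cluster (context and namespace names taken from the line above):

    kubectl config current-context   # should print default-k8s-diff-port-415555
    kubectl get nodes -o wide
    kubectl -n kube-system get pods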
	I0919 17:52:15.493238   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:17.495147   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:19.497990   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:16.500634   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:19.572697   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:25.436229   45961 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 17:52:25.436332   45961 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:52:25.436448   45961 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:52:25.436580   45961 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:52:25.436693   45961 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:52:25.436784   45961 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:52:25.438740   45961 out.go:204]   - Generating certificates and keys ...
	I0919 17:52:25.438831   45961 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:52:25.438907   45961 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:52:25.439035   45961 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:52:25.439117   45961 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:52:25.439225   45961 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:52:25.439306   45961 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:52:25.439378   45961 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:52:25.439455   45961 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:52:25.439554   45961 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:52:25.439646   45961 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:52:25.439692   45961 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:52:25.439759   45961 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:52:25.439825   45961 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:52:25.439892   45961 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:52:25.439982   45961 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:52:25.440068   45961 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:52:25.440183   45961 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:52:25.440276   45961 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:52:25.441897   45961 out.go:204]   - Booting up control plane ...
	I0919 17:52:25.442005   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:52:25.442103   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:52:25.442163   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:52:25.442248   45961 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:52:25.442343   45961 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:52:25.442428   45961 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 17:52:25.442641   45961 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:52:25.442703   45961 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003935 seconds
	I0919 17:52:25.442819   45961 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:52:25.442911   45961 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:52:25.442959   45961 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:52:25.443101   45961 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-215748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 17:52:25.443144   45961 kubeadm.go:322] [bootstrap-token] Using token: xzx8bb.31rxl0d2e5l1asvj
	I0919 17:52:25.444479   45961 out.go:204]   - Configuring RBAC rules ...
	I0919 17:52:25.444574   45961 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:52:25.444640   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 17:52:25.444747   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:52:25.444886   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:52:25.445049   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:52:25.445178   45961 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:52:25.445344   45961 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 17:52:25.445403   45961 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:52:25.445462   45961 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:52:25.445475   45961 kubeadm.go:322] 
	I0919 17:52:25.445558   45961 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:52:25.445569   45961 kubeadm.go:322] 
	I0919 17:52:25.445659   45961 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:52:25.445672   45961 kubeadm.go:322] 
	I0919 17:52:25.445691   45961 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:52:25.445740   45961 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:52:25.445779   45961 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:52:25.445785   45961 kubeadm.go:322] 
	I0919 17:52:25.445824   45961 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 17:52:25.445830   45961 kubeadm.go:322] 
	I0919 17:52:25.445873   45961 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 17:52:25.445879   45961 kubeadm.go:322] 
	I0919 17:52:25.445939   45961 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:52:25.446038   45961 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:52:25.446154   45961 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:52:25.446172   45961 kubeadm.go:322] 
	I0919 17:52:25.446275   45961 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 17:52:25.446361   45961 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:52:25.446371   45961 kubeadm.go:322] 
	I0919 17:52:25.446473   45961 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xzx8bb.31rxl0d2e5l1asvj \
	I0919 17:52:25.446594   45961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 17:52:25.446623   45961 kubeadm.go:322] 	--control-plane 
	I0919 17:52:25.446641   45961 kubeadm.go:322] 
	I0919 17:52:25.446774   45961 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:52:25.446782   45961 kubeadm.go:322] 
	I0919 17:52:25.446874   45961 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xzx8bb.31rxl0d2e5l1asvj \
	I0919 17:52:25.447044   45961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:52:25.447066   45961 cni.go:84] Creating CNI manager for ""
	I0919 17:52:25.447079   45961 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:52:25.448742   45961 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:52:21.994034   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:24.494339   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:25.656705   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:25.450147   45961 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:52:25.473476   45961 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
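The 457-byte file copied above is minikube's bridge CNI configuration; its exact content is not in this log. To inspect what actually landed on the node, from inside an SSH session on this profile (`minikube ssh -p no-preload-215748`, the profile this kubeadm init belongs to):

    ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist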
	I0919 17:52:25.529295   45961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:52:25.529383   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:25.529387   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=no-preload-215748 minikube.k8s.io/updated_at=2023_09_19T17_52_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:25.625308   45961 ops.go:34] apiserver oom_adj: -16
	I0919 17:52:25.905954   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.037543   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.638479   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:27.138484   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:27.637901   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:28.138033   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:28.638787   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.494798   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:28.213192   45696 pod_ready.go:81] duration metric: took 4m0.001033854s waiting for pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:28.213226   45696 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:28.213243   45696 pod_ready.go:38] duration metric: took 4m12.067034727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:28.213266   45696 kubeadm.go:640] restartCluster took 4m32.254857032s
	W0919 17:52:28.213338   45696 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0919 17:52:28.213378   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:52:28.728646   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:29.138616   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:29.638381   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:30.138155   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:30.637984   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:31.137977   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:31.638547   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:32.138617   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:32.638253   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:33.138335   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:33.638302   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:34.804640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:34.138702   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:34.638549   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:35.138431   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:35.638642   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:36.138000   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:36.638726   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:37.138394   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:37.315805   45961 kubeadm.go:1081] duration metric: took 11.786488266s to wait for elevateKubeSystemPrivileges.
	I0919 17:52:37.315840   45961 kubeadm.go:406] StartCluster complete in 5m9.899215362s
	I0919 17:52:37.315856   45961 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:52:37.315945   45961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:52:37.317563   45961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:52:37.317815   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:52:37.317844   45961 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:52:37.317936   45961 addons.go:69] Setting storage-provisioner=true in profile "no-preload-215748"
	I0919 17:52:37.317943   45961 addons.go:69] Setting default-storageclass=true in profile "no-preload-215748"
	I0919 17:52:37.317959   45961 addons.go:231] Setting addon storage-provisioner=true in "no-preload-215748"
	I0919 17:52:37.317963   45961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-215748"
	W0919 17:52:37.317967   45961 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:52:37.317964   45961 addons.go:69] Setting metrics-server=true in profile "no-preload-215748"
	I0919 17:52:37.317988   45961 addons.go:231] Setting addon metrics-server=true in "no-preload-215748"
	W0919 17:52:37.318000   45961 addons.go:240] addon metrics-server should already be in state true
	I0919 17:52:37.318016   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.318041   45961 config.go:182] Loaded profile config "no-preload-215748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:52:37.318051   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.318380   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318407   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318416   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.318429   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.318475   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318495   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.334365   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0919 17:52:37.334822   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.335368   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.335395   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.335861   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.336052   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.337347   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0919 17:52:37.337347   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0919 17:52:37.337998   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.338047   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.338480   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.338498   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.338610   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.338632   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.338840   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.338941   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.339461   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.339490   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.339536   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.339565   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.354064   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45647
	I0919 17:52:37.354482   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.354893   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.354912   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.355353   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.355578   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.357181   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.359063   45961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:52:37.357674   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0919 17:52:37.358308   45961 addons.go:231] Setting addon default-storageclass=true in "no-preload-215748"
	W0919 17:52:37.360428   45961 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:52:37.360461   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.360569   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:52:37.360583   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:52:37.360602   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.360832   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.360869   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.360891   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.361393   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.361411   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.361836   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.362040   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.363959   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.364124   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.365928   45961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:52:37.364551   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.364765   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.367579   45961 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:52:37.367592   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:52:37.367609   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.367639   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.367660   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.367827   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.368140   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.370800   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.371215   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.371240   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.371416   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.371612   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.371777   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.371914   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.379222   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
	I0919 17:52:37.379631   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.380097   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.380122   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.380481   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.381718   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.381754   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.396647   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0919 17:52:37.397058   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.397474   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.397492   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.397842   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.397994   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.399762   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.400224   45961 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:52:37.400239   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:52:37.400255   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.403299   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.403745   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.403767   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.403773   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.403948   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.404080   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.404221   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.448139   45961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-215748" context rescaled to 1 replicas
	I0919 17:52:37.448183   45961 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:52:37.450076   45961 out.go:177] * Verifying Kubernetes components...
	I0919 17:52:37.451036   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:37.579553   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:52:37.592116   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:52:37.604757   45961 node_ready.go:35] waiting up to 6m0s for node "no-preload-215748" to be "Ready" ...
	I0919 17:52:37.605235   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:52:37.611496   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:52:37.611523   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:52:37.625762   45961 node_ready.go:49] node "no-preload-215748" has status "Ready":"True"
	I0919 17:52:37.625782   45961 node_ready.go:38] duration metric: took 20.997061ms waiting for node "no-preload-215748" to be "Ready" ...
	I0919 17:52:37.625790   45961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:37.638366   45961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.693993   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:52:37.694019   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:52:37.754746   45961 pod_ready.go:92] pod "etcd-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.754769   45961 pod_ready.go:81] duration metric: took 116.377819ms waiting for pod "etcd-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.754782   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.798115   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:52:37.798139   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:52:37.815124   45961 pod_ready.go:92] pod "kube-apiserver-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.815192   45961 pod_ready.go:81] duration metric: took 60.393176ms waiting for pod "kube-apiserver-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.815218   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.922999   45961 pod_ready.go:92] pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.923022   45961 pod_ready.go:81] duration metric: took 107.794672ms waiting for pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.923038   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hk6k2" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.995437   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:52:39.961838   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.382243112s)
	I0919 17:52:39.961884   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.961893   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.961902   45961 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.356635779s)
	I0919 17:52:39.961928   45961 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 17:52:39.961843   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.369699378s)
	I0919 17:52:39.961953   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.961963   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962202   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962219   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962231   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962239   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962348   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962409   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962447   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962490   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962517   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962540   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962553   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962563   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962526   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962601   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962778   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962819   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962828   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962942   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962959   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962972   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:40.064135   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.06864457s)
	I0919 17:52:40.064196   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:40.064212   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:40.064511   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:40.064532   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:40.064542   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:40.064552   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:40.064775   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:40.064835   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:40.064840   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:40.064850   45961 addons.go:467] Verifying addon metrics-server=true in "no-preload-215748"
	I0919 17:52:40.066741   45961 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0919 17:52:37.876720   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:40.068231   45961 addons.go:502] enable addons completed in 2.750388313s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0919 17:52:40.249105   45961 pod_ready.go:102] pod "kube-proxy-hk6k2" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:40.760507   45961 pod_ready.go:92] pod "kube-proxy-hk6k2" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:40.760532   45961 pod_ready.go:81] duration metric: took 2.837485326s waiting for pod "kube-proxy-hk6k2" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.760546   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.770519   45961 pod_ready.go:92] pod "kube-scheduler-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:40.770574   45961 pod_ready.go:81] duration metric: took 9.988955ms waiting for pod "kube-scheduler-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.770610   45961 pod_ready.go:38] duration metric: took 3.144808421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:40.770630   45961 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:52:40.770686   45961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:52:40.806513   45961 api_server.go:72] duration metric: took 3.358300901s to wait for apiserver process to appear ...
	I0919 17:52:40.806538   45961 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:52:40.806556   45961 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0919 17:52:40.812758   45961 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I0919 17:52:40.813960   45961 api_server.go:141] control plane version: v1.28.2
	I0919 17:52:40.813985   45961 api_server.go:131] duration metric: took 7.436946ms to wait for apiserver health ...
	I0919 17:52:40.813996   45961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:52:40.821498   45961 system_pods.go:59] 8 kube-system pods found
	I0919 17:52:40.821525   45961 system_pods.go:61] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:40.821536   45961 system_pods.go:61] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:40.821543   45961 system_pods.go:61] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:40.821549   45961 system_pods.go:61] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:40.821555   45961 system_pods.go:61] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:40.821563   45961 system_pods.go:61] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:40.821572   45961 system_pods.go:61] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:40.821583   45961 system_pods.go:61] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:40.821599   45961 system_pods.go:74] duration metric: took 7.595377ms to wait for pod list to return data ...
	I0919 17:52:40.821608   45961 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:52:40.828423   45961 default_sa.go:45] found service account: "default"
	I0919 17:52:40.828446   45961 default_sa.go:55] duration metric: took 6.830774ms for default service account to be created ...
	I0919 17:52:40.828455   45961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:52:41.018524   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.018560   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.018569   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.018578   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.018585   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.018591   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.018601   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.018612   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.018625   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.018645   45961 retry.go:31] will retry after 307.254812ms: missing components: kube-dns
	I0919 17:52:41.337815   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.337844   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.337851   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.337856   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.337863   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.337869   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.337875   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.337883   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.337893   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.337915   45961 retry.go:31] will retry after 378.465105ms: missing components: kube-dns
	I0919 17:52:41.734680   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.734717   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.734728   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.734736   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.734743   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.734750   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.734757   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.734765   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.734780   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.734801   45961 retry.go:31] will retry after 432.849904ms: missing components: kube-dns
	I0919 17:52:42.176510   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:42.176536   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Running
	I0919 17:52:42.176545   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:42.176552   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:42.176559   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:42.176569   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:42.176576   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:42.176590   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:42.176603   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Running
	I0919 17:52:42.176616   45961 system_pods.go:126] duration metric: took 1.348155168s to wait for k8s-apps to be running ...
	I0919 17:52:42.176628   45961 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:52:42.176683   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:42.189952   45961 system_svc.go:56] duration metric: took 13.312874ms WaitForService to wait for kubelet.
	I0919 17:52:42.189981   45961 kubeadm.go:581] duration metric: took 4.741777133s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:52:42.190012   45961 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:52:42.194919   45961 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:52:42.194945   45961 node_conditions.go:123] node cpu capacity is 2
	I0919 17:52:42.194957   45961 node_conditions.go:105] duration metric: took 4.939533ms to run NodePressure ...
	I0919 17:52:42.194969   45961 start.go:228] waiting for startup goroutines ...
	I0919 17:52:42.194978   45961 start.go:233] waiting for cluster config update ...
	I0919 17:52:42.194988   45961 start.go:242] writing updated cluster config ...
	I0919 17:52:42.195287   45961 ssh_runner.go:195] Run: rm -f paused
	I0919 17:52:42.245669   45961 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:52:42.248021   45961 out.go:177] * Done! kubectl is now configured to use "no-preload-215748" cluster and "default" namespace by default
	I0919 17:52:41.936906   45696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.723493225s)
	I0919 17:52:41.936983   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:41.951451   45696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:52:41.960478   45696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:52:41.968960   45696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:52:41.969031   45696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 17:52:42.019868   45696 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 17:52:42.020027   45696 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:52:42.171083   45696 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:52:42.171221   45696 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:52:42.171332   45696 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:52:42.429760   45696 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:52:42.431619   45696 out.go:204]   - Generating certificates and keys ...
	I0919 17:52:42.431770   45696 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:52:42.431870   45696 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:52:42.431973   45696 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:52:42.432172   45696 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:52:42.432781   45696 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:52:42.433451   45696 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:52:42.434353   45696 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:52:42.435577   45696 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:52:42.436820   45696 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:52:42.438302   45696 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:52:42.439391   45696 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:52:42.439509   45696 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:52:42.929570   45696 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:52:43.332709   45696 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:52:43.433651   45696 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:52:43.695104   45696 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:52:43.696103   45696 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:52:43.699874   45696 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:52:43.701784   45696 out.go:204]   - Booting up control plane ...
	I0919 17:52:43.701926   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:52:43.702063   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:52:43.702819   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:52:43.724659   45696 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:52:43.725576   45696 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:52:43.725671   45696 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 17:52:43.851582   45696 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:52:43.960637   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:47.032663   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:51.355564   45696 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504191 seconds
	I0919 17:52:51.355695   45696 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:52:51.376627   45696 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:52:51.908759   45696 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:52:51.909064   45696 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-415155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 17:52:52.424367   45696 kubeadm.go:322] [bootstrap-token] Using token: kntdz4.46i9d2q57hx70gnb
	I0919 17:52:52.425876   45696 out.go:204]   - Configuring RBAC rules ...
	I0919 17:52:52.425993   45696 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:52:52.433647   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 17:52:52.443514   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:52:52.447239   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:52:52.453258   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:52:52.459432   45696 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:52:52.475208   45696 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 17:52:52.722848   45696 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:52:52.841255   45696 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:52:52.841280   45696 kubeadm.go:322] 
	I0919 17:52:52.841356   45696 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:52:52.841369   45696 kubeadm.go:322] 
	I0919 17:52:52.841456   45696 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:52:52.841464   45696 kubeadm.go:322] 
	I0919 17:52:52.841502   45696 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:52:52.841568   45696 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:52:52.841637   45696 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:52:52.841648   45696 kubeadm.go:322] 
	I0919 17:52:52.841698   45696 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 17:52:52.841704   45696 kubeadm.go:322] 
	I0919 17:52:52.841745   45696 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 17:52:52.841780   45696 kubeadm.go:322] 
	I0919 17:52:52.841875   45696 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:52:52.841942   45696 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:52:52.842039   45696 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:52:52.842048   45696 kubeadm.go:322] 
	I0919 17:52:52.842134   45696 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 17:52:52.842243   45696 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:52:52.842262   45696 kubeadm.go:322] 
	I0919 17:52:52.842358   45696 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kntdz4.46i9d2q57hx70gnb \
	I0919 17:52:52.842491   45696 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 17:52:52.842523   45696 kubeadm.go:322] 	--control-plane 
	I0919 17:52:52.842530   45696 kubeadm.go:322] 
	I0919 17:52:52.842645   45696 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:52:52.842659   45696 kubeadm.go:322] 
	I0919 17:52:52.842773   45696 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kntdz4.46i9d2q57hx70gnb \
	I0919 17:52:52.842930   45696 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:52:52.844420   45696 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:52:52.844450   45696 cni.go:84] Creating CNI manager for ""
	I0919 17:52:52.844461   45696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:52:52.846322   45696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:52:52.848269   45696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:52:52.875578   45696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 17:52:52.905183   45696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:52:52.905261   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:52.905281   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=embed-certs-415155 minikube.k8s.io/updated_at=2023_09_19T17_52_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:52.993717   45696 ops.go:34] apiserver oom_adj: -16
	I0919 17:52:53.208727   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.311165   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.904182   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:54.403711   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:54.904152   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:55.404377   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.108640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:55.903772   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.404320   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.904201   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:57.403637   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:57.904174   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:58.404553   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:58.903691   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:59.403716   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:59.903872   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:00.403725   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.180664   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:00.904540   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:01.404211   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:01.903897   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.403857   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.903841   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:03.404601   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:03.904222   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:04.404483   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:04.903813   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:05.404474   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.260629   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:05.332731   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:05.904337   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:06.003333   45696 kubeadm.go:1081] duration metric: took 13.098131801s to wait for elevateKubeSystemPrivileges.
	I0919 17:53:06.003365   45696 kubeadm.go:406] StartCluster complete in 5m10.10389936s
	I0919 17:53:06.003387   45696 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:53:06.003476   45696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:53:06.005541   45696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:53:06.005772   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:53:06.005785   45696 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:53:06.005854   45696 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-415155"
	I0919 17:53:06.005877   45696 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-415155"
	W0919 17:53:06.005884   45696 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:53:06.005926   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.005930   45696 addons.go:69] Setting default-storageclass=true in profile "embed-certs-415155"
	I0919 17:53:06.005946   45696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-415155"
	I0919 17:53:06.005979   45696 config.go:182] Loaded profile config "embed-certs-415155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:53:06.005982   45696 addons.go:69] Setting metrics-server=true in profile "embed-certs-415155"
	I0919 17:53:06.006009   45696 addons.go:231] Setting addon metrics-server=true in "embed-certs-415155"
	W0919 17:53:06.006026   45696 addons.go:240] addon metrics-server should already be in state true
	I0919 17:53:06.006071   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.006331   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006328   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006364   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.006396   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.006451   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006493   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.023141   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43557
	I0919 17:53:06.023485   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I0919 17:53:06.023646   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I0919 17:53:06.023657   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.023882   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.024040   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.024209   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024230   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.024333   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024358   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.024616   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.024697   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.024810   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024827   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.025260   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.025301   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.025486   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.025695   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.026032   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.026062   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.044712   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
	I0919 17:53:06.045176   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.045627   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.045646   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.045976   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.046161   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.047603   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.049519   45696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:53:06.047878   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0919 17:53:06.052909   45696 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:53:06.052922   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:53:06.052937   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.053277   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.053868   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.053887   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.054337   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.054580   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.056666   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.056710   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.058604   45696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:53:06.057084   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.057313   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.060027   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.060046   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:53:06.060060   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:53:06.060079   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.060210   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.060497   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.060815   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.062794   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.063165   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.063196   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.063327   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.063475   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.063593   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.063701   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.066891   45696 addons.go:231] Setting addon default-storageclass=true in "embed-certs-415155"
	W0919 17:53:06.066905   45696 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:53:06.066926   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.066965   45696 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-415155" context rescaled to 1 replicas
	I0919 17:53:06.066987   45696 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.6 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:53:06.068622   45696 out.go:177] * Verifying Kubernetes components...
	I0919 17:53:06.067176   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.070241   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.070253   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:53:06.085010   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0919 17:53:06.085392   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.085940   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.085976   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.086322   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.086774   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.086820   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.101494   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0919 17:53:06.101938   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.102528   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.102552   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.103014   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.103256   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.104793   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.105087   45696 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:53:06.105107   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:53:06.105127   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.107742   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.108073   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.108105   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.108336   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.108547   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.108744   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.108908   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.205454   45696 node_ready.go:35] waiting up to 6m0s for node "embed-certs-415155" to be "Ready" ...
	I0919 17:53:06.205565   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:53:06.225929   45696 node_ready.go:49] node "embed-certs-415155" has status "Ready":"True"
	I0919 17:53:06.225949   45696 node_ready.go:38] duration metric: took 20.464817ms waiting for node "embed-certs-415155" to be "Ready" ...
	I0919 17:53:06.225957   45696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:53:06.251954   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:53:06.251981   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:53:06.269198   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:53:06.296923   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:53:06.314108   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:53:06.314141   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:53:06.338106   45696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:06.378123   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:53:06.378154   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:53:06.492313   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:53:08.235564   45696 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.029959877s)
	I0919 17:53:08.235599   45696 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0919 17:53:08.597917   45696 pod_ready.go:102] pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace has status "Ready":"False"
	I0919 17:53:08.741920   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.44495643s)
	I0919 17:53:08.741982   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.741995   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.741926   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.472691573s)
	I0919 17:53:08.742031   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742050   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742377   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742393   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742403   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742413   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742492   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.742542   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742555   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742566   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742576   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742617   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742630   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742643   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742651   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742771   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742785   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.744274   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.744297   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.818418   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.326058126s)
	I0919 17:53:08.818472   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.818486   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.818839   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.818891   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.818927   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.818938   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.818948   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.820442   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.820464   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.820474   45696 addons.go:467] Verifying addon metrics-server=true in "embed-certs-415155"
	I0919 17:53:08.820479   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.822508   45696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0919 17:53:08.824220   45696 addons.go:502] enable addons completed in 2.818433307s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0919 17:53:10.561437   45696 pod_ready.go:92] pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.561462   45696 pod_ready.go:81] duration metric: took 4.223330172s waiting for pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.561472   45696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.568541   45696 pod_ready.go:92] pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.568566   45696 pod_ready.go:81] duration metric: took 7.086927ms waiting for pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.568579   45696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.577684   45696 pod_ready.go:92] pod "etcd-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.577709   45696 pod_ready.go:81] duration metric: took 9.120912ms waiting for pod "etcd-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.577722   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.585005   45696 pod_ready.go:92] pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.585033   45696 pod_ready.go:81] duration metric: took 7.302173ms waiting for pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.585043   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.590934   45696 pod_ready.go:92] pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.590951   45696 pod_ready.go:81] duration metric: took 5.90203ms waiting for pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.590960   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b75j2" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.358510   45696 pod_ready.go:92] pod "kube-proxy-b75j2" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:11.358535   45696 pod_ready.go:81] duration metric: took 767.569086ms waiting for pod "kube-proxy-b75j2" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.358544   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.759839   45696 pod_ready.go:92] pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:11.759863   45696 pod_ready.go:81] duration metric: took 401.313058ms waiting for pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.759872   45696 pod_ready.go:38] duration metric: took 5.533896789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:53:11.759887   45696 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:53:11.759933   45696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:53:11.773700   45696 api_server.go:72] duration metric: took 5.706687251s to wait for apiserver process to appear ...
	I0919 17:53:11.773730   45696 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:53:11.773747   45696 api_server.go:253] Checking apiserver healthz at https://192.168.50.6:8443/healthz ...
	I0919 17:53:11.784435   45696 api_server.go:279] https://192.168.50.6:8443/healthz returned 200:
	ok
	I0919 17:53:11.785929   45696 api_server.go:141] control plane version: v1.28.2
	I0919 17:53:11.785952   45696 api_server.go:131] duration metric: took 12.214361ms to wait for apiserver health ...
	I0919 17:53:11.785971   45696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:53:11.961906   45696 system_pods.go:59] 9 kube-system pods found
	I0919 17:53:11.961937   45696 system_pods.go:61] "coredns-5dd5756b68-2dbbk" [93175ebd-b717-4c98-a56b-aca1404ac8bd] Running
	I0919 17:53:11.961945   45696 system_pods.go:61] "coredns-5dd5756b68-6gswl" [6dee60e5-fa0a-4b65-b4d1-dbb0fff29d3f] Running
	I0919 17:53:11.961952   45696 system_pods.go:61] "etcd-embed-certs-415155" [47f8759f-be32-4f38-9bc4-c9a0578b6303] Running
	I0919 17:53:11.961959   45696 system_pods.go:61] "kube-apiserver-embed-certs-415155" [8682e140-4951-4ea4-a7df-5b1324e33094] Running
	I0919 17:53:11.961967   45696 system_pods.go:61] "kube-controller-manager-embed-certs-415155" [00572708-93f6-4bf5-b6c4-83e9c588b071] Running
	I0919 17:53:11.961973   45696 system_pods.go:61] "kube-proxy-b75j2" [7be05aae-86ca-4640-a0f3-6518e7896711] Running
	I0919 17:53:11.961981   45696 system_pods.go:61] "kube-scheduler-embed-certs-415155" [4532a79d-a306-46a9-a0aa-d1f48b90645c] Running
	I0919 17:53:11.961991   45696 system_pods.go:61] "metrics-server-57f55c9bc5-kdxsz" [1588f0a7-18ae-402b-8916-e3a6423e9e15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:53:11.962003   45696 system_pods.go:61] "storage-provisioner" [83e9eb53-dd92-4b84-a787-82bea5449cd2] Running
	I0919 17:53:11.962013   45696 system_pods.go:74] duration metric: took 176.035985ms to wait for pod list to return data ...
	I0919 17:53:11.962027   45696 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:53:12.157305   45696 default_sa.go:45] found service account: "default"
	I0919 17:53:12.157328   45696 default_sa.go:55] duration metric: took 195.295342ms for default service account to be created ...
	I0919 17:53:12.157336   45696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:53:12.359884   45696 system_pods.go:86] 9 kube-system pods found
	I0919 17:53:12.359910   45696 system_pods.go:89] "coredns-5dd5756b68-2dbbk" [93175ebd-b717-4c98-a56b-aca1404ac8bd] Running
	I0919 17:53:12.359916   45696 system_pods.go:89] "coredns-5dd5756b68-6gswl" [6dee60e5-fa0a-4b65-b4d1-dbb0fff29d3f] Running
	I0919 17:53:12.359920   45696 system_pods.go:89] "etcd-embed-certs-415155" [47f8759f-be32-4f38-9bc4-c9a0578b6303] Running
	I0919 17:53:12.359924   45696 system_pods.go:89] "kube-apiserver-embed-certs-415155" [8682e140-4951-4ea4-a7df-5b1324e33094] Running
	I0919 17:53:12.359929   45696 system_pods.go:89] "kube-controller-manager-embed-certs-415155" [00572708-93f6-4bf5-b6c4-83e9c588b071] Running
	I0919 17:53:12.359932   45696 system_pods.go:89] "kube-proxy-b75j2" [7be05aae-86ca-4640-a0f3-6518e7896711] Running
	I0919 17:53:12.359936   45696 system_pods.go:89] "kube-scheduler-embed-certs-415155" [4532a79d-a306-46a9-a0aa-d1f48b90645c] Running
	I0919 17:53:12.359943   45696 system_pods.go:89] "metrics-server-57f55c9bc5-kdxsz" [1588f0a7-18ae-402b-8916-e3a6423e9e15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:53:12.359948   45696 system_pods.go:89] "storage-provisioner" [83e9eb53-dd92-4b84-a787-82bea5449cd2] Running
	I0919 17:53:12.359956   45696 system_pods.go:126] duration metric: took 202.614357ms to wait for k8s-apps to be running ...
	I0919 17:53:12.359962   45696 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:53:12.359999   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:53:12.373545   45696 system_svc.go:56] duration metric: took 13.572497ms WaitForService to wait for kubelet.
	I0919 17:53:12.373579   45696 kubeadm.go:581] duration metric: took 6.30657382s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:53:12.373607   45696 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:53:12.557409   45696 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:53:12.557435   45696 node_conditions.go:123] node cpu capacity is 2
	I0919 17:53:12.557444   45696 node_conditions.go:105] duration metric: took 183.83246ms to run NodePressure ...
	I0919 17:53:12.557455   45696 start.go:228] waiting for startup goroutines ...
	I0919 17:53:12.557461   45696 start.go:233] waiting for cluster config update ...
	I0919 17:53:12.557469   45696 start.go:242] writing updated cluster config ...
	I0919 17:53:12.557699   45696 ssh_runner.go:195] Run: rm -f paused
	I0919 17:53:12.605145   45696 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:53:12.607197   45696 out.go:177] * Done! kubectl is now configured to use "embed-certs-415155" cluster and "default" namespace by default
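The embed-certs startup above finishes by polling the apiserver's /healthz endpoint (the "Checking apiserver healthz at https://192.168.50.6:8443/healthz" lines) until it answers 200 before declaring the cluster ready. Below is a minimal, self-contained Go sketch of that kind of wait; the URL, the 5s/500ms/2m timing values, and the skipped certificate verification are illustrative assumptions, not the actual api_server.go implementation.

// healthz_wait.go: a minimal sketch, assuming an apiserver URL and timeout,
// of polling /healthz until it returns HTTP 200 the way the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Bootstrap apiservers present self-signed certs, so this
			// sketch skips verification; a real client would pin the CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200: apiserver is up
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	err := waitForHealthz("https://192.168.50.6:8443/healthz", 2*time.Minute)
	fmt.Println("healthz wait result:", err)
}

Once the poll succeeds, the log moves on to listing kube-system pods and checking the default service account, as seen above.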
	I0919 17:53:11.412630   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:14.488732   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:20.564623   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:23.636680   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:29.716717   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:32.788701   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:38.868669   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:41.940647   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:48.020643   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:51.092656   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:57.172691   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:54:00.244719   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:54:03.245602   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:54:03.245640   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:03.247321   47798 machine.go:91] provisioned docker machine in 4m37.423277683s
	I0919 17:54:03.247365   47798 fix.go:56] fixHost completed within 4m37.445374366s
	I0919 17:54:03.247373   47798 start.go:83] releasing machines lock for "old-k8s-version-100627", held for 4m37.445391375s
	W0919 17:54:03.247389   47798 start.go:688] error starting host: provision: host is not running
	W0919 17:54:03.247488   47798 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0919 17:54:03.247503   47798 start.go:703] Will try again in 5 seconds ...
	I0919 17:54:08.249214   47798 start.go:365] acquiring machines lock for old-k8s-version-100627: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:54:08.249335   47798 start.go:369] acquired machines lock for "old-k8s-version-100627" in 79.973µs
	I0919 17:54:08.249367   47798 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:54:08.249377   47798 fix.go:54] fixHost starting: 
	I0919 17:54:08.249707   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:54:08.249734   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:54:08.264866   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I0919 17:54:08.265315   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:54:08.265726   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:54:08.265759   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:54:08.266072   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:54:08.266269   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:08.266419   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:54:08.267941   47798 fix.go:102] recreateIfNeeded on old-k8s-version-100627: state=Stopped err=<nil>
	I0919 17:54:08.267960   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	W0919 17:54:08.268118   47798 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:54:08.269915   47798 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-100627" ...
	I0919 17:54:08.271210   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Start
	I0919 17:54:08.271445   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring networks are active...
	I0919 17:54:08.272016   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network default is active
	I0919 17:54:08.272329   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network mk-old-k8s-version-100627 is active
	I0919 17:54:08.272743   47798 main.go:141] libmachine: (old-k8s-version-100627) Getting domain xml...
	I0919 17:54:08.273350   47798 main.go:141] libmachine: (old-k8s-version-100627) Creating domain...
	I0919 17:54:09.557879   47798 main.go:141] libmachine: (old-k8s-version-100627) Waiting to get IP...
	I0919 17:54:09.558718   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:09.559190   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:09.559270   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:09.559173   48693 retry.go:31] will retry after 309.613104ms: waiting for machine to come up
	I0919 17:54:09.870868   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:09.871472   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:09.871496   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:09.871435   48693 retry.go:31] will retry after 375.744574ms: waiting for machine to come up
	I0919 17:54:10.249255   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.249750   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.249780   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.249702   48693 retry.go:31] will retry after 305.257713ms: waiting for machine to come up
	I0919 17:54:10.556042   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.556587   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.556621   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.556510   48693 retry.go:31] will retry after 394.207165ms: waiting for machine to come up
	I0919 17:54:10.952178   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.952797   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.952828   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.952732   48693 retry.go:31] will retry after 706.704251ms: waiting for machine to come up
	I0919 17:54:11.660566   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:11.661038   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:11.661061   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:11.660988   48693 retry.go:31] will retry after 924.155076ms: waiting for machine to come up
	I0919 17:54:12.586278   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:12.586772   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:12.586805   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:12.586721   48693 retry.go:31] will retry after 1.035300526s: waiting for machine to come up
	I0919 17:54:13.623123   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:13.623597   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:13.623622   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:13.623562   48693 retry.go:31] will retry after 1.060639157s: waiting for machine to come up
	I0919 17:54:14.685531   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:14.686012   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:14.686044   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:14.685973   48693 retry.go:31] will retry after 1.61320677s: waiting for machine to come up
	I0919 17:54:16.301447   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:16.301908   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:16.301957   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:16.301864   48693 retry.go:31] will retry after 2.031293541s: waiting for machine to come up
	I0919 17:54:18.334791   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:18.335384   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:18.335440   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:18.335329   48693 retry.go:31] will retry after 1.861837572s: waiting for machine to come up
	I0919 17:54:20.199546   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:20.200058   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:20.200088   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:20.200009   48693 retry.go:31] will retry after 2.332364238s: waiting for machine to come up
	I0919 17:54:22.533654   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:22.534131   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:22.534162   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:22.534071   48693 retry.go:31] will retry after 4.475201998s: waiting for machine to come up
	I0919 17:54:27.013553   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.014052   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has current primary IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.014075   47798 main.go:141] libmachine: (old-k8s-version-100627) Found IP for machine: 192.168.72.182
	I0919 17:54:27.014091   47798 main.go:141] libmachine: (old-k8s-version-100627) Reserving static IP address...
	I0919 17:54:27.014512   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "old-k8s-version-100627", mac: "52:54:00:ee:1d:e7", ip: "192.168.72.182"} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.014535   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | skip adding static IP to network mk-old-k8s-version-100627 - found existing host DHCP lease matching {name: "old-k8s-version-100627", mac: "52:54:00:ee:1d:e7", ip: "192.168.72.182"}
	I0919 17:54:27.014560   47798 main.go:141] libmachine: (old-k8s-version-100627) Reserved static IP address: 192.168.72.182
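The "will retry after …: waiting for machine to come up" lines above are a DHCP-lease poll with growing, jittered delays until the restarted VM reports an IP. The Go sketch below shows that wait-with-backoff pattern in isolation; lookupIP is a purely hypothetical stand-in for the real lease lookup, and the delay formula only roughly approximates the 300ms-4.5s spread seen in the log.

// waitforip.go: a minimal sketch of the "waiting for machine to come up"
// retry loop, with randomized, growing delays between lease lookups.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP stands in for querying the hypervisor's DHCP leases for the VM's MAC.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the fifth try
		return "", errNoLease
	}
	return "192.168.72.182", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		// Grow the delay with the attempt count and add jitter.
		delay := time.Duration(attempt)*300*time.Millisecond +
			time.Duration(rand.Intn(300))*time.Millisecond
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("machine did not come up within %s", timeout)
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP for machine:", ip)
}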
	I0919 17:54:27.014579   47798 main.go:141] libmachine: (old-k8s-version-100627) Waiting for SSH to be available...
	I0919 17:54:27.014592   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Getting to WaitForSSH function...
	I0919 17:54:27.016929   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.017394   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.017431   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.017594   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH client type: external
	I0919 17:54:27.017634   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa (-rw-------)
	I0919 17:54:27.017678   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:54:27.017700   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | About to run SSH command:
	I0919 17:54:27.017711   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | exit 0
	I0919 17:54:27.112557   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | SSH cmd err, output: <nil>: 
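The WaitForSSH step just above uses the external SSH client type, i.e. it shells out to the system ssh binary with the options logged at "Using SSH client type: external". A rough Go sketch of that call follows; the helper name runRemote and the key path are assumptions for illustration, not minikube's actual sshutil code.

// ssh_exec.go: a rough sketch of shelling out to the system ssh binary with
// the client options shown in the "Using SSH client type: external" lines.
package main

import (
	"fmt"
	"os/exec"
)

func runRemote(addr, keyPath, command string) (string, error) {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		command,
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// "exit 0" mirrors the reachability probe the log runs before provisioning.
	out, err := runRemote("192.168.72.182", "/path/to/machines/old-k8s-version-100627/id_rsa", "exit 0")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}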
	I0919 17:54:27.112933   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetConfigRaw
	I0919 17:54:27.113574   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:27.116176   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.116556   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.116581   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.116841   47798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:54:27.117019   47798 machine.go:88] provisioning docker machine ...
	I0919 17:54:27.117036   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:27.117261   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.117429   47798 buildroot.go:166] provisioning hostname "old-k8s-version-100627"
	I0919 17:54:27.117447   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.117599   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.119667   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.119987   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.120020   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.120131   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.120278   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.120442   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.120625   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.120795   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.121114   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.121128   47798 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100627 && echo "old-k8s-version-100627" | sudo tee /etc/hostname
	I0919 17:54:27.264601   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-100627
	
	I0919 17:54:27.264628   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.267433   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.267871   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.267906   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.268044   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.268260   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.268459   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.268589   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.268764   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.269227   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.269258   47798 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-100627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-100627/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-100627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:54:27.408513   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:54:27.408544   47798 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:54:27.408566   47798 buildroot.go:174] setting up certificates
	I0919 17:54:27.408590   47798 provision.go:83] configureAuth start
	I0919 17:54:27.408607   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.408923   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:27.411896   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.412345   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.412376   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.412595   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.414909   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.415293   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.415331   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.415417   47798 provision.go:138] copyHostCerts
	I0919 17:54:27.415479   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:54:27.415491   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:54:27.415556   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:54:27.415662   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:54:27.415675   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:54:27.415721   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:54:27.415941   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:54:27.415954   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:54:27.415990   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:54:27.416043   47798 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-100627 san=[192.168.72.182 192.168.72.182 localhost 127.0.0.1 minikube old-k8s-version-100627]
	I0919 17:54:27.473903   47798 provision.go:172] copyRemoteCerts
	I0919 17:54:27.473953   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:54:27.473978   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.476857   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.477234   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.477272   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.477453   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.477649   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.477818   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.477957   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:27.578694   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:54:27.603580   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 17:54:27.629314   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:54:27.653764   47798 provision.go:86] duration metric: configureAuth took 245.159127ms
	I0919 17:54:27.653788   47798 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:54:27.653989   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:54:27.654081   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.656608   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.657058   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.657113   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.657286   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.657453   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.657605   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.657785   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.657972   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.658276   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.658292   47798 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:54:28.000190   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:54:28.000238   47798 machine.go:91] provisioned docker machine in 883.206741ms
	I0919 17:54:28.000251   47798 start.go:300] post-start starting for "old-k8s-version-100627" (driver="kvm2")
	I0919 17:54:28.000265   47798 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:54:28.000288   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.000617   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:54:28.000650   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.003541   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.003980   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.004027   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.004182   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.004383   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.004583   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.004749   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.099219   47798 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:54:28.103738   47798 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:54:28.103766   47798 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:54:28.103853   47798 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:54:28.103953   47798 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:54:28.104066   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:54:28.115827   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:54:28.139080   47798 start.go:303] post-start completed in 138.802144ms
	I0919 17:54:28.139102   47798 fix.go:56] fixHost completed within 19.88972528s
	I0919 17:54:28.139121   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.141760   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.142169   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.142195   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.142396   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.142607   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.142726   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.142917   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.143114   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:28.143573   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:28.143592   47798 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:54:28.277495   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695146068.223192427
	
	I0919 17:54:28.277520   47798 fix.go:206] guest clock: 1695146068.223192427
	I0919 17:54:28.277530   47798 fix.go:219] Guest: 2023-09-19 17:54:28.223192427 +0000 UTC Remote: 2023-09-19 17:54:28.139105122 +0000 UTC m=+302.480491248 (delta=84.087305ms)
	I0919 17:54:28.277553   47798 fix.go:190] guest clock delta is within tolerance: 84.087305ms
	I0919 17:54:28.277559   47798 start.go:83] releasing machines lock for "old-k8s-version-100627", held for 20.02820818s
	I0919 17:54:28.277581   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.277863   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:28.280976   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.281274   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.281314   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.281491   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282065   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282261   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282362   47798 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:54:28.282425   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.282518   47798 ssh_runner.go:195] Run: cat /version.json
	I0919 17:54:28.282557   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.285235   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285574   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285626   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.285660   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285758   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.285980   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.286009   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.286037   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.286133   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.286185   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.286298   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.286345   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.286479   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.286613   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.377342   47798 ssh_runner.go:195] Run: systemctl --version
	I0919 17:54:28.402900   47798 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:54:28.551979   47798 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:54:28.558949   47798 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:54:28.559040   47798 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:54:28.574671   47798 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:54:28.574707   47798 start.go:469] detecting cgroup driver to use...
	I0919 17:54:28.574789   47798 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:54:28.589301   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:54:28.603381   47798 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:54:28.603456   47798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:54:28.616574   47798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:54:28.630029   47798 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:54:28.735665   47798 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:54:28.855576   47798 docker.go:212] disabling docker service ...
	I0919 17:54:28.855656   47798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:54:28.869977   47798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:54:28.883344   47798 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:54:29.010033   47798 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:54:29.123737   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:54:29.136560   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:54:29.153418   47798 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0919 17:54:29.153472   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.164328   47798 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:54:29.164376   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.175468   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.186361   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.197606   47798 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:54:29.209144   47798 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:54:29.219566   47798 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:54:29.219608   47798 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 17:54:29.232771   47798 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:54:29.241491   47798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:54:29.363253   47798 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:54:29.564774   47798 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:54:29.564853   47798 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:54:29.570170   47798 start.go:537] Will wait 60s for crictl version
	I0919 17:54:29.570236   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:29.574361   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:54:29.613496   47798 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:54:29.613591   47798 ssh_runner.go:195] Run: crio --version
	I0919 17:54:29.668331   47798 ssh_runner.go:195] Run: crio --version
	I0919 17:54:29.724060   47798 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0919 17:54:29.725565   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:29.728603   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:29.729060   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:29.729090   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:29.729325   47798 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0919 17:54:29.733860   47798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:54:29.745878   47798 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:54:29.745937   47798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:54:29.783853   47798 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:54:29.783912   47798 ssh_runner.go:195] Run: which lz4
	I0919 17:54:29.787843   47798 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0919 17:54:29.792095   47798 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:54:29.792124   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0919 17:54:31.578682   47798 crio.go:444] Took 1.790863 seconds to copy over tarball
	I0919 17:54:31.578766   47798 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 17:54:34.491190   47798 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.912396501s)
	I0919 17:54:34.491218   47798 crio.go:451] Took 2.912514 seconds to extract the tarball
	I0919 17:54:34.491227   47798 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 17:54:34.532896   47798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:54:34.584238   47798 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:54:34.584259   47798 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 17:54:34.584318   47798 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.584343   47798 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0919 17:54:34.584357   47798 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.584378   47798 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:34.584540   47798 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.584551   47798 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.584565   47798 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.584321   47798 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:34.586227   47798 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:34.586227   47798 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.586253   47798 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.586228   47798 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.586234   47798 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0919 17:54:34.586352   47798 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.586266   47798 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:34.586581   47798 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.759785   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0919 17:54:34.802920   47798 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0919 17:54:34.802955   47798 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0919 17:54:34.803013   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:34.807458   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0919 17:54:34.847013   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0919 17:54:34.847128   47798 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.852501   47798 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0919 17:54:34.852523   47798 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.852579   47798 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.853807   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.857117   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.858504   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.859676   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.868306   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.920560   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:35.645907   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:37.386271   47798 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.533664793s)
	I0919 17:54:37.386302   47798 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0919 17:54:37.386337   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2: (2.532490506s)
	I0919 17:54:37.386377   47798 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0919 17:54:37.386391   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0: (2.529252811s)
	I0919 17:54:37.386410   47798 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:37.386437   47798 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0919 17:54:37.386458   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386462   47798 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:37.386469   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0: (2.527943734s)
	I0919 17:54:37.386508   47798 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0919 17:54:37.386516   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386529   47798 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:37.386549   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0: (2.526835511s)
	I0919 17:54:37.386581   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0: (2.518230422s)
	I0919 17:54:37.386605   47798 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0919 17:54:37.386609   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0: (2.466014033s)
	I0919 17:54:37.386609   47798 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0919 17:54:37.386628   47798 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:37.386629   47798 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:37.386638   47798 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0919 17:54:37.386566   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386659   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386659   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386662   47798 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:37.386765   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386701   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.740765346s)
	I0919 17:54:37.399029   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:37.399077   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:37.399121   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:37.399122   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:37.402150   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:37.402313   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0919 17:54:37.540994   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0919 17:54:37.541026   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0919 17:54:37.541059   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0919 17:54:37.541106   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0919 17:54:37.541145   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0919 17:54:37.549028   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0919 17:54:37.549081   47798 cache_images.go:92] LoadImages completed in 2.964810789s
	W0919 17:54:37.549147   47798 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0919 17:54:37.549230   47798 ssh_runner.go:195] Run: crio config
	I0919 17:54:37.603915   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:54:37.603954   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:54:37.603977   47798 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:54:37.604007   47798 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-100627 NodeName:old-k8s-version-100627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0919 17:54:37.604180   47798 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-100627"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-100627
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.182:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:54:37.604310   47798 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-100627 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:54:37.604383   47798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0919 17:54:37.614235   47798 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:54:37.614296   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:54:37.623423   47798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0919 17:54:37.640384   47798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:54:37.656081   47798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0919 17:54:37.672787   47798 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0919 17:54:37.676417   47798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:54:37.687828   47798 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627 for IP: 192.168.72.182
	I0919 17:54:37.687874   47798 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:54:37.688058   47798 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:54:37.688143   47798 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:54:37.688222   47798 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.key
	I0919 17:54:37.688279   47798 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key.3425b032
	I0919 17:54:37.688322   47798 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key
	I0919 17:54:37.688488   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:54:37.688531   47798 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:54:37.688546   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:54:37.688579   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:54:37.688609   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:54:37.688636   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:54:37.688697   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:54:37.689406   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:54:37.714671   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 17:54:37.737884   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:54:37.761839   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 17:54:37.784692   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:54:37.810865   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:54:37.832897   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:54:37.856026   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:54:37.879335   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:54:37.902377   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:54:37.924388   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:54:37.948816   47798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:54:37.965669   47798 ssh_runner.go:195] Run: openssl version
	I0919 17:54:37.971227   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:54:37.983269   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.988756   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.988807   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.994392   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:54:38.006098   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:54:38.017868   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.022601   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.022655   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.028421   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:54:38.039288   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:54:38.053131   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.057881   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.057938   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.063816   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
	I0919 17:54:38.074972   47798 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:54:38.080260   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:54:38.085942   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:54:38.091638   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:54:38.097282   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:54:38.103194   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 17:54:38.109759   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 17:54:38.115202   47798 kubeadm.go:404] StartCluster: {Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:54:38.115274   47798 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 17:54:38.115313   47798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:54:38.153988   47798 cri.go:89] found id: ""
	I0919 17:54:38.154063   47798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:54:38.164888   47798 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0919 17:54:38.164913   47798 kubeadm.go:636] restartCluster start
	I0919 17:54:38.164965   47798 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 17:54:38.174810   47798 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.175856   47798 kubeconfig.go:92] found "old-k8s-version-100627" server: "https://192.168.72.182:8443"
	I0919 17:54:38.178372   47798 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 17:54:38.187917   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.187969   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.199654   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.199674   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.199715   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.211155   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.712221   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.712312   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.725306   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:39.211431   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:39.211494   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:39.223919   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:39.711400   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:39.711482   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:39.724103   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:40.211311   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:40.211379   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:40.224111   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:40.711529   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:40.711609   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:40.724291   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:41.212183   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:41.212285   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:41.225226   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:41.711742   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:41.711821   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:41.724590   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:42.212221   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:42.212289   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:42.225772   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:42.711304   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:42.711378   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:42.724468   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:43.211895   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:43.211978   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:43.225017   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:43.711734   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:43.711824   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:43.724995   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:44.211535   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:44.211616   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:44.224372   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:44.712113   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:44.712179   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:44.725330   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:45.211942   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:45.212027   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:45.226290   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:45.712216   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:45.712295   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:45.725065   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:46.212053   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:46.212150   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:46.226417   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:46.711997   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:46.712082   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:46.725608   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:47.212214   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:47.212300   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:47.224935   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:47.711452   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:47.711540   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:47.723970   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:48.188749   47798 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0919 17:54:48.188785   47798 kubeadm.go:1128] stopping kube-system containers ...
	I0919 17:54:48.188800   47798 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0919 17:54:48.188862   47798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:54:48.227729   47798 cri.go:89] found id: ""
	I0919 17:54:48.227789   47798 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 17:54:48.243618   47798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:54:48.253221   47798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:54:48.253285   47798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:54:48.262806   47798 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0919 17:54:48.262831   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:48.405093   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.114151   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.324152   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.457833   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.554530   47798 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:54:49.554595   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:49.568050   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:50.092864   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:50.592484   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:51.092979   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:51.114757   47798 api_server.go:72] duration metric: took 1.560225697s to wait for apiserver process to appear ...
	I0919 17:54:51.114781   47798 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:54:51.114800   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:56.115914   47798 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 17:54:56.115962   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:57.769883   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:54:57.769915   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:54:58.270598   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:58.278169   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:54:58.278210   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0919 17:54:58.770880   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:58.778649   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:54:58.778679   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0919 17:54:59.270233   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:59.276275   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0919 17:54:59.283868   47798 api_server.go:141] control plane version: v1.16.0
	I0919 17:54:59.283896   47798 api_server.go:131] duration metric: took 8.169106612s to wait for apiserver health ...
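(For reference, the 403 → 500 → 200 healthz sequence above can be reproduced by hand; the commands below are illustrative and assume the apiserver address from this log and a kubeconfig context named after the profile.)

# Anonymous probe from the host: a 403 is expected until RBAC bootstrap completes, as logged above
curl -k https://192.168.72.182:8443/healthz
# Authenticated probe showing the per-check [+]/[-] lines seen in the 500 responses
kubectl --context old-k8s-version-100627 get --raw='/healthz?verbose'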
	I0919 17:54:59.283908   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:54:59.283916   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:54:59.285960   47798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:54:59.287537   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:54:59.298142   47798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
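(The 457-byte conflist written above is a standard bridge CNI configuration; a minimal sketch of such a file follows. Field values, including the pod subnet, are illustrative and are not the exact contents minikube generated here.)

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF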
	I0919 17:54:59.315861   47798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:54:59.324878   47798 system_pods.go:59] 8 kube-system pods found
	I0919 17:54:59.324917   47798 system_pods.go:61] "coredns-5644d7b6d9-4mh4f" [382ef590-a6ef-4402-8762-1649f060fbc4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:54:59.324940   47798 system_pods.go:61] "coredns-5644d7b6d9-wqwp7" [8756ca49-2953-422d-a534-6d1fa5655fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:54:59.324947   47798 system_pods.go:61] "etcd-old-k8s-version-100627" [1e7bdb28-9c7e-4cae-a87e-ec2fad64e820] Running
	I0919 17:54:59.324955   47798 system_pods.go:61] "kube-apiserver-old-k8s-version-100627" [59a703b6-7c16-48ba-8a78-c1ecd606f138] Running
	I0919 17:54:59.324966   47798 system_pods.go:61] "kube-controller-manager-old-k8s-version-100627" [ac10d741-9a7d-45a1-86f5-a912075b49b9] Running
	I0919 17:54:59.324971   47798 system_pods.go:61] "kube-proxy-j7kqn" [79381ec1-45a7-4424-8383-f97b530979d3] Running
	I0919 17:54:59.324986   47798 system_pods.go:61] "kube-scheduler-old-k8s-version-100627" [40df95ee-b184-48ff-b276-d01c7763c7fc] Running
	I0919 17:54:59.324993   47798 system_pods.go:61] "storage-provisioner" [00e5e0c9-0453-440b-aa5c-e6811f428297] Running
	I0919 17:54:59.325005   47798 system_pods.go:74] duration metric: took 9.119135ms to wait for pod list to return data ...
	I0919 17:54:59.325017   47798 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:54:59.328813   47798 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:54:59.328845   47798 node_conditions.go:123] node cpu capacity is 2
	I0919 17:54:59.328859   47798 node_conditions.go:105] duration metric: took 3.833575ms to run NodePressure ...
	I0919 17:54:59.328879   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:59.658953   47798 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:54:59.662655   47798 retry.go:31] will retry after 352.037588ms: kubelet not initialised
	I0919 17:55:00.020425   47798 retry.go:31] will retry after 411.927656ms: kubelet not initialised
	I0919 17:55:00.438027   47798 retry.go:31] will retry after 483.370654ms: kubelet not initialised
	I0919 17:55:00.928598   47798 retry.go:31] will retry after 987.946924ms: kubelet not initialised
	I0919 17:55:01.923328   47798 retry.go:31] will retry after 1.679023275s: kubelet not initialised
	I0919 17:55:03.607494   47798 retry.go:31] will retry after 1.92599571s: kubelet not initialised
	I0919 17:55:05.539070   47798 retry.go:31] will retry after 2.735570072s: kubelet not initialised
	I0919 17:55:08.280198   47798 retry.go:31] will retry after 4.516491636s: kubelet not initialised
	I0919 17:55:12.803629   47798 retry.go:31] will retry after 9.24421999s: kubelet not initialised
	I0919 17:55:22.053509   47798 retry.go:31] will retry after 10.860983763s: kubelet not initialised
	I0919 17:55:32.921288   47798 retry.go:31] will retry after 19.590918142s: kubelet not initialised
	I0919 17:55:52.517612   47798 kubeadm.go:787] kubelet initialised
	I0919 17:55:52.517637   47798 kubeadm.go:788] duration metric: took 52.858662322s waiting for restarted kubelet to initialise ...
	I0919 17:55:52.517644   47798 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:55:52.523992   47798 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.530133   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.530151   47798 pod_ready.go:81] duration metric: took 6.127596ms waiting for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.530160   47798 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.535186   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.535202   47798 pod_ready.go:81] duration metric: took 5.035759ms waiting for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.535209   47798 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.540300   47798 pod_ready.go:92] pod "etcd-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.540317   47798 pod_ready.go:81] duration metric: took 5.101572ms waiting for pod "etcd-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.540324   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.546670   47798 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.546687   47798 pod_ready.go:81] duration metric: took 6.356984ms waiting for pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.546696   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.916320   47798 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.916342   47798 pod_ready.go:81] duration metric: took 369.639886ms waiting for pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.916353   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j7kqn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.316733   47798 pod_ready.go:92] pod "kube-proxy-j7kqn" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:53.316762   47798 pod_ready.go:81] duration metric: took 400.400609ms waiting for pod "kube-proxy-j7kqn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.316788   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.717319   47798 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:53.717344   47798 pod_ready.go:81] duration metric: took 400.544097ms waiting for pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.717358   47798 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:56.023621   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:55:58.025543   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:00.522985   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:02.523350   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:05.022971   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:07.023767   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:09.524598   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:12.024269   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:14.524109   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:16.525347   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:19.025990   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:21.522712   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:23.523098   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:25.525823   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:27.526575   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:30.023751   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:32.023914   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:34.523709   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:37.025284   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:39.523886   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:42.023525   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:44.023602   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:46.524942   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:49.023162   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:51.025968   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:53.523737   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:55.524950   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:58.023648   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:00.024635   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:02.024981   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:04.524374   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:07.024495   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:09.523646   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:12.023778   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:14.024012   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:16.024668   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:18.524581   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:20.525264   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:23.024223   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:25.024271   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:27.024863   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:29.524389   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:31.524867   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:34.026361   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:36.523516   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:38.523641   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:40.525417   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:43.023938   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:45.024235   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:47.025554   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:49.524344   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:52.023880   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:54.024324   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:56.024615   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:58.523806   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:00.524330   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:02.524813   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:05.023667   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:07.024328   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:09.521983   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:11.524126   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:14.033167   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:16.524193   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:19.023478   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:21.023719   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:23.024876   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:25.525000   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:28.022897   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:30.023651   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:32.523506   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:35.023201   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:37.024229   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:39.522709   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:41.524752   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:44.022121   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:46.025229   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:48.523728   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:50.524600   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:53.024769   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:55.523745   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:58.025806   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:00.524396   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:03.023037   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:05.023335   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:07.024052   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:09.024205   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:11.523020   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:13.524065   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:16.025098   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:18.523293   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:20.525391   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:23.025049   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:25.522619   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:27.525208   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:30.024344   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:32.024984   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:34.523267   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:36.524365   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:39.023558   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:41.523208   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:43.524139   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:46.023918   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:48.523431   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:50.523998   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:53.024150   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:53.718434   47798 pod_ready.go:81] duration metric: took 4m0.001059167s waiting for pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace to be "Ready" ...
	E0919 17:59:53.718466   47798 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:59:53.718484   47798 pod_ready.go:38] duration metric: took 4m1.200831266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:59:53.718520   47798 kubeadm.go:640] restartCluster took 5m15.553599416s
	W0919 17:59:53.718575   47798 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
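(The reset below is triggered because metrics-server-74d5856cc6-rncgn never reported Ready during the 4m0s wait above. Illustrative commands for inspecting such a pod, assuming a kubeconfig context named after the profile:)

kubectl --context old-k8s-version-100627 -n kube-system describe pod metrics-server-74d5856cc6-rncgn
kubectl --context old-k8s-version-100627 -n kube-system get events --sort-by=.lastTimestamp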
	I0919 17:59:53.718604   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:59:58.500835   47798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.782205666s)
	I0919 17:59:58.500900   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:59:58.514207   47798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:59:58.524054   47798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:59:58.532896   47798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:59:58.532945   47798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0919 17:59:58.588089   47798 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0919 17:59:58.588197   47798 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:59:58.739994   47798 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:59:58.740116   47798 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:59:58.740291   47798 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:59:58.968628   47798 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:59:58.968805   47798 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:59:58.977284   47798 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0919 17:59:59.111196   47798 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:59:59.113466   47798 out.go:204]   - Generating certificates and keys ...
	I0919 17:59:59.113599   47798 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:59:59.113711   47798 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:59:59.113854   47798 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:59:59.113938   47798 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:59:59.114070   47798 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:59:59.114144   47798 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:59:59.114911   47798 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:59:59.115382   47798 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:59:59.115986   47798 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:59:59.116548   47798 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:59:59.116630   47798 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:59:59.116713   47798 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:59:59.334495   47798 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:59:59.627886   47798 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:59:59.967368   47798 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:00:00.114260   47798 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:00:00.115507   47798 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:00:00.117811   47798 out.go:204]   - Booting up control plane ...
	I0919 18:00:00.117935   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:00:00.122651   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:00:00.125112   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:00:00.126687   47798 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:00:00.129807   47798 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 18:00:11.635043   47798 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.504905 seconds
	I0919 18:00:11.635206   47798 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:00:11.654058   47798 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:00:12.194702   47798 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:00:12.194899   47798 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-100627 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0919 18:00:12.704504   47798 kubeadm.go:322] [bootstrap-token] Using token: exrkug.z0q4aqb4emd0lkvm
	I0919 18:00:12.706136   47798 out.go:204]   - Configuring RBAC rules ...
	I0919 18:00:12.706241   47798 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:00:12.721292   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:00:12.729553   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:00:12.735434   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:00:12.739232   47798 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:00:12.816288   47798 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 18:00:13.140789   47798 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 18:00:13.142170   47798 kubeadm.go:322] 
	I0919 18:00:13.142257   47798 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 18:00:13.142268   47798 kubeadm.go:322] 
	I0919 18:00:13.142338   47798 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 18:00:13.142348   47798 kubeadm.go:322] 
	I0919 18:00:13.142382   47798 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 18:00:13.142468   47798 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:00:13.142554   47798 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:00:13.142571   47798 kubeadm.go:322] 
	I0919 18:00:13.142642   47798 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 18:00:13.142734   47798 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:00:13.142826   47798 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:00:13.142841   47798 kubeadm.go:322] 
	I0919 18:00:13.142952   47798 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0919 18:00:13.143062   47798 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 18:00:13.143073   47798 kubeadm.go:322] 
	I0919 18:00:13.143177   47798 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token exrkug.z0q4aqb4emd0lkvm \
	I0919 18:00:13.143336   47798 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 18:00:13.143374   47798 kubeadm.go:322]     --control-plane 	  
	I0919 18:00:13.143387   47798 kubeadm.go:322] 
	I0919 18:00:13.143501   47798 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:00:13.143511   47798 kubeadm.go:322] 
	I0919 18:00:13.143613   47798 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token exrkug.z0q4aqb4emd0lkvm \
	I0919 18:00:13.143744   47798 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 18:00:13.144341   47798 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
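(The preflight warning above can be cleared inside the VM exactly as the message suggests; an illustrative invocation using the test binary's ssh subcommand, as used elsewhere in this report:)

out/minikube-linux-amd64 -p old-k8s-version-100627 ssh "sudo systemctl enable kubelet.service"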
	I0919 18:00:13.144373   47798 cni.go:84] Creating CNI manager for ""
	I0919 18:00:13.144392   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:00:13.146075   47798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:00:13.148011   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:00:13.159265   47798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 18:00:13.178271   47798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:00:13.178388   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.178420   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=old-k8s-version-100627 minikube.k8s.io/updated_at=2023_09_19T18_00_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.212392   47798 ops.go:34] apiserver oom_adj: -16
	I0919 18:00:13.509743   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.611752   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:14.210418   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:14.710689   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:15.210316   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:15.710515   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:16.210852   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:16.710451   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:17.210179   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:17.710559   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:18.210390   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:18.710683   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:19.210573   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:19.710581   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:20.210732   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:20.710461   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:21.210702   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:21.709813   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:22.209903   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:22.709847   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:23.210276   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:23.710692   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:24.210645   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:24.710835   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:25.209793   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:25.710473   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:26.209945   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:26.710136   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:27.210552   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:27.710679   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:28.209990   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:28.365531   47798 kubeadm.go:1081] duration metric: took 15.187210441s to wait for elevateKubeSystemPrivileges.
	I0919 18:00:28.365564   47798 kubeadm.go:406] StartCluster complete in 5m50.250366407s
	I0919 18:00:28.365586   47798 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:00:28.365675   47798 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 18:00:28.368279   47798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:00:28.368566   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:00:28.368696   47798 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 18:00:28.368769   47798 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368797   47798 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-100627"
	I0919 18:00:28.368803   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 18:00:28.368850   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.368863   47798 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368878   47798 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-100627"
	W0919 18:00:28.368886   47798 addons.go:240] addon metrics-server should already be in state true
	I0919 18:00:28.368922   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.368851   47798 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368982   47798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-100627"
	I0919 18:00:28.369268   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369273   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369292   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.369294   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.369392   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369412   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.389023   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0919 18:00:28.389631   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.389718   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35909
	I0919 18:00:28.390023   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.390257   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36187
	I0919 18:00:28.390523   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.390547   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.390646   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.390676   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.390676   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.390895   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391311   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391391   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.391418   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.391709   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391712   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.391748   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.391757   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.391791   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.391838   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.410811   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0919 18:00:28.410846   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0919 18:00:28.411329   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.411366   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.411777   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.411796   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.411888   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.411905   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.412177   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.412219   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.412326   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.412402   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.414149   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.417333   47798 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 18:00:28.414621   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.419038   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:00:28.419051   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:00:28.419071   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.420833   47798 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:00:28.422332   47798 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:00:28.422358   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:00:28.422378   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.422103   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.422902   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.422992   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.423016   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.423112   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.423305   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.423474   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.425328   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.425845   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.425869   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.425895   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.426078   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.426219   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.426322   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.464699   47798 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-100627"
	I0919 18:00:28.464737   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.465028   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.465059   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.479442   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I0919 18:00:28.479839   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.480266   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.480294   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.480676   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.481211   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.481248   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.495810   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0919 18:00:28.496299   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.496709   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.496740   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.497099   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.497375   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.499150   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.499406   47798 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:00:28.499420   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:00:28.499434   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.502227   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.502622   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.502653   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.502792   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.502961   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.503112   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.503256   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.738306   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:00:28.738334   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 18:00:28.739481   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:00:28.753537   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:00:28.807289   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:00:28.807321   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:00:28.904080   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:00:28.904107   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:00:28.991114   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:00:29.327327   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:00:29.371292   47798 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-100627" context rescaled to 1 replicas
	I0919 18:00:29.371337   47798 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:00:29.373222   47798 out.go:177] * Verifying Kubernetes components...
	I0919 18:00:29.374912   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:00:30.105746   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366227457s)
	I0919 18:00:30.105776   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.352204878s)
	I0919 18:00:30.105793   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.105805   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.105814   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.105827   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106180   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Closing plugin on server side
	I0919 18:00:30.106222   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106236   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106246   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106259   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106357   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106373   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106396   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106408   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106486   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106500   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106513   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106522   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106592   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106602   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106826   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106842   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.185977   47798 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0919 18:00:30.185980   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.194821805s)
	I0919 18:00:30.186035   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.186031   47798 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-100627" to be "Ready" ...
	I0919 18:00:30.186049   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.186367   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.186383   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.186393   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.186402   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.186647   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.186671   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.186681   47798 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-100627"
	I0919 18:00:30.188971   47798 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0919 18:00:30.190949   47798 addons.go:502] enable addons completed in 1.822257993s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0919 18:00:30.236503   47798 node_ready.go:49] node "old-k8s-version-100627" has status "Ready":"True"
	I0919 18:00:30.236526   47798 node_ready.go:38] duration metric: took 50.473068ms waiting for node "old-k8s-version-100627" to be "Ready" ...
	I0919 18:00:30.236538   47798 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:00:30.243959   47798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:32.262563   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:34.263997   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:36.762957   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:37.763670   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.763694   47798 pod_ready.go:81] duration metric: took 7.519708991s waiting for pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.763704   47798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.769351   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.769371   47798 pod_ready.go:81] duration metric: took 5.660975ms waiting for pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.769382   47798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x7p9v" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.773846   47798 pod_ready.go:92] pod "kube-proxy-x7p9v" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.773866   47798 pod_ready.go:81] duration metric: took 4.476479ms waiting for pod "kube-proxy-x7p9v" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.773879   47798 pod_ready.go:38] duration metric: took 7.537327576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:00:37.773896   47798 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:00:37.773947   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:00:37.789245   47798 api_server.go:72] duration metric: took 8.417877969s to wait for apiserver process to appear ...
	I0919 18:00:37.789267   47798 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:00:37.789283   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 18:00:37.796929   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0919 18:00:37.798217   47798 api_server.go:141] control plane version: v1.16.0
	I0919 18:00:37.798233   47798 api_server.go:131] duration metric: took 8.960108ms to wait for apiserver health ...
	I0919 18:00:37.798240   47798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:00:37.802732   47798 system_pods.go:59] 5 kube-system pods found
	I0919 18:00:37.802751   47798 system_pods.go:61] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:37.802755   47798 system_pods.go:61] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:37.802759   47798 system_pods.go:61] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:37.802765   47798 system_pods.go:61] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:37.802771   47798 system_pods.go:61] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:37.802775   47798 system_pods.go:74] duration metric: took 4.531294ms to wait for pod list to return data ...
	I0919 18:00:37.802781   47798 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:00:37.805090   47798 default_sa.go:45] found service account: "default"
	I0919 18:00:37.805108   47798 default_sa.go:55] duration metric: took 2.323003ms for default service account to be created ...
	I0919 18:00:37.805115   47798 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:00:37.809387   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:37.809412   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:37.809421   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:37.809428   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:37.809437   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:37.809445   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:37.809492   47798 retry.go:31] will retry after 308.50392ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.123229   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.123251   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.123256   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.123262   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.123271   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.123277   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.123291   47798 retry.go:31] will retry after 322.697394ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.452201   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.452227   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.452232   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.452236   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.452242   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.452248   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.452263   47798 retry.go:31] will retry after 457.851598ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.916270   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.916309   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.916318   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.916325   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.916336   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.916345   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.916367   47798 retry.go:31] will retry after 438.479707ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:39.360169   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:39.360194   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:39.360199   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:39.360203   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:39.360210   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:39.360214   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:39.360228   47798 retry.go:31] will retry after 636.764599ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:40.002876   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:40.002902   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:40.002907   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:40.002911   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:40.002918   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:40.002922   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:40.002936   47798 retry.go:31] will retry after 763.456742ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:40.771715   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:40.771743   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:40.771751   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:40.771758   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:40.771768   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:40.771777   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:40.771794   47798 retry.go:31] will retry after 849.595493ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:41.628988   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:41.629014   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:41.629019   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:41.629024   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:41.629030   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:41.629035   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:41.629048   47798 retry.go:31] will retry after 1.130396523s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:42.765798   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:42.765825   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:42.765830   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:42.765834   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:42.765841   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:42.765846   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:42.765861   47798 retry.go:31] will retry after 1.444918771s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:44.216701   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:44.216726   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:44.216731   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:44.216735   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:44.216743   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:44.216751   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:44.216769   47798 retry.go:31] will retry after 2.010339666s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:46.233732   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:46.233764   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:46.233772   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:46.233779   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:46.233789   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:46.233798   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:46.233817   47798 retry.go:31] will retry after 2.386355588s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:48.625414   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:48.625451   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:48.625458   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:48.625463   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:48.625469   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:48.625478   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:48.625496   47798 retry.go:31] will retry after 3.40684833s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:52.037490   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:52.037516   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:52.037522   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:52.037526   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:52.037532   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:52.037538   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:52.037553   47798 retry.go:31] will retry after 4.080274795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:56.123283   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:56.123307   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:56.123312   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:56.123316   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:56.123322   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:56.123327   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:56.123341   47798 retry.go:31] will retry after 4.076928493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:00.205817   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:01:00.205842   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:00.205848   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:00.205851   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:00.205860   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:00.205865   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:00.205880   47798 retry.go:31] will retry after 6.340158574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:06.551794   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:01:06.551821   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:06.551829   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:06.551835   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:06.551844   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:06.551852   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:06.551870   47798 retry.go:31] will retry after 8.178931758s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:14.737898   47798 system_pods.go:86] 8 kube-system pods found
	I0919 18:01:14.737926   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:14.737934   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:14.737941   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:14.737947   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Pending
	I0919 18:01:14.737955   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:14.737961   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Pending
	I0919 18:01:14.737969   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:14.737977   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:14.737996   47798 retry.go:31] will retry after 7.690456991s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:22.435672   47798 system_pods.go:86] 8 kube-system pods found
	I0919 18:01:22.435706   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:22.435714   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:22.435721   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:22.435728   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Running
	I0919 18:01:22.435736   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:22.435744   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Running
	I0919 18:01:22.435755   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:22.435765   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:22.435782   47798 retry.go:31] will retry after 8.810480707s: missing components: kube-apiserver
	I0919 18:01:31.254171   47798 system_pods.go:86] 9 kube-system pods found
	I0919 18:01:31.254216   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:31.254223   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:31.254228   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:31.254233   47798 system_pods.go:89] "kube-apiserver-old-k8s-version-100627" [477571a2-c091-4d30-9c70-389556fade77] Running
	I0919 18:01:31.254240   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Running
	I0919 18:01:31.254246   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:31.254252   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Running
	I0919 18:01:31.254263   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:31.254278   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:31.254287   47798 system_pods.go:126] duration metric: took 53.449167375s to wait for k8s-apps to be running ...
	I0919 18:01:31.254295   47798 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:01:31.254346   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:01:31.270302   47798 system_svc.go:56] duration metric: took 16.000049ms WaitForService to wait for kubelet.
	I0919 18:01:31.270329   47798 kubeadm.go:581] duration metric: took 1m1.898967343s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 18:01:31.270356   47798 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:01:31.273300   47798 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 18:01:31.273324   47798 node_conditions.go:123] node cpu capacity is 2
	I0919 18:01:31.273334   47798 node_conditions.go:105] duration metric: took 2.973337ms to run NodePressure ...
	I0919 18:01:31.273344   47798 start.go:228] waiting for startup goroutines ...
	I0919 18:01:31.273349   47798 start.go:233] waiting for cluster config update ...
	I0919 18:01:31.273358   47798 start.go:242] writing updated cluster config ...
	I0919 18:01:31.273601   47798 ssh_runner.go:195] Run: rm -f paused
	I0919 18:01:31.321319   47798 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0919 18:01:31.323360   47798 out.go:177] 
	W0919 18:01:31.324777   47798 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0919 18:01:31.326209   47798 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0919 18:01:31.327585   47798 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-100627" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:47:38 UTC, ends at Tue 2023-09-19 18:02:14 UTC. --
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.951084425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=982aca11-71cf-4155-9947-5e6d2c052539 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.952375090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=717eac4d-eae5-458a-976f-ebecba4ab320 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.952780666Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146533952766681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=717eac4d-eae5-458a-976f-ebecba4ab320 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.953608393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a5c04a55-5e78-4f21-9662-3a47f959bc7c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.953658603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a5c04a55-5e78-4f21-9662-3a47f959bc7c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.953830545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639,PodSandboxId:8cf13e6e546db1173a098b29385f41d69dced0ff09c55ff435cc067f34848e08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145990203371481,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e9eb53-dd92-4b84-a787-82bea5449cd2,},Annotations:map[string]string{io.kubernetes.container.hash: 23c0ca2d,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3,PodSandboxId:f17fe50a94dc6c5dac44471ef6b34fe943ebe3a6ca715ed0d622c720ab4a7805,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145989875634329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b75j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7be05aae-86ca-4640-a0f3-6518e7896711,},Annotations:map[string]string{io.kubernetes.container.hash: db3551f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac,PodSandboxId:d2c40bdab0a72b127710c5186de27fc536d17646985ab09c498c946895cb66ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145988936472478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2dbbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93175ebd-b717-4c98-a56b-aca1404ac8bd,},Annotations:map[string]string{io.kubernetes.container.hash: 64c86263,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd,PodSandboxId:81c3c3b08917e103bc2df6b1a6317fd66a289c93d08a4dc203866f8fbba50df4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145965935976289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 1670db7e0aab166d0e691d553d87d094,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6,PodSandboxId:35138525cd8bdd437c49edb71ced54a9900ef8a2fad17a50cfc47e88a467c919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145965387167371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3620b65e9d6874920789e5c75788a548,},Annotations:
map[string]string{io.kubernetes.container.hash: c42e2ce8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0,PodSandboxId:2458503066ab0f24bc9e32e728dd83b7667e465762c4b17d8dfacf8726161b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145965406467349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45da9a46e3aeecd9e79ac
27f306ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed,PodSandboxId:f665a7ec3953a0cfc044e610ae021c158ed781dae95541093bfc767156d0c930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145965325217615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9653fce8177804cb18eaf9a2711eec1
4,},Annotations:map[string]string{io.kubernetes.container.hash: d6b02d78,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a5c04a55-5e78-4f21-9662-3a47f959bc7c name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.972653084Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=9a971d6f-8a43-47e3-98eb-cb79173a5512 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.972850697Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8cf13e6e546db1173a098b29385f41d69dced0ff09c55ff435cc067f34848e08,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:83e9eb53-dd92-4b84-a787-82bea5449cd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145989090466134,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e9eb53-dd92-4b84-a787-82bea5449cd2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-19T17:53:08.752643911Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3116641f38f40f948c119c87ee19bb6f6cf40173b4d08e84c690688614dd4eae,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-kdxsz,Uid:1588f0a7-18ae-402b-8916-e3a6423e9e15,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145989001918237,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-kdxsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1588f0a7-18ae-402b-8916-e3a6423e9e1
5,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T17:53:08.633629390Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f17fe50a94dc6c5dac44471ef6b34fe943ebe3a6ca715ed0d622c720ab4a7805,Metadata:&PodSandboxMetadata{Name:kube-proxy-b75j2,Uid:7be05aae-86ca-4640-a0f3-6518e7896711,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145986688650089,Labels:map[string]string{controller-revision-hash: 5cbdb8dcbd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b75j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7be05aae-86ca-4640-a0f3-6518e7896711,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T17:53:06.047340278Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2c40bdab0a72b127710c5186de27fc536d17646985ab09c498c946895cb66ad,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-2dbbk,Ui
d:93175ebd-b717-4c98-a56b-aca1404ac8bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145986648822510,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-2dbbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93175ebd-b717-4c98-a56b-aca1404ac8bd,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T17:53:06.313724921Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2458503066ab0f24bc9e32e728dd83b7667e465762c4b17d8dfacf8726161b83,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-415155,Uid:45da9a46e3aeecd9e79ac27f306ed8bb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145964715095469,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 45da9a46e3aeecd9e79ac27f306ed8bb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 45da9a46e3aeecd9e79ac27f306ed8bb,kubernetes.io/config.seen: 2023-09-19T17:52:44.176787564Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:81c3c3b08917e103bc2df6b1a6317fd66a289c93d08a4dc203866f8fbba50df4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-415155,Uid:1670db7e0aab166d0e691d553d87d094,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145964691560741,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1670db7e0aab166d0e691d553d87d094,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1670db7e0aab166d0e691d553d87d094,kubernetes.io/config.seen: 2023-09-19T17:52:44.176788374Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:35138525cd8bdd437
c49edb71ced54a9900ef8a2fad17a50cfc47e88a467c919,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-415155,Uid:3620b65e9d6874920789e5c75788a548,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145964685217038,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3620b65e9d6874920789e5c75788a548,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.6:2379,kubernetes.io/config.hash: 3620b65e9d6874920789e5c75788a548,kubernetes.io/config.seen: 2023-09-19T17:52:44.176782693Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f665a7ec3953a0cfc044e610ae021c158ed781dae95541093bfc767156d0c930,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-415155,Uid:9653fce8177804cb18eaf9a2711eec14,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145964664580956,Labels:m
ap[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9653fce8177804cb18eaf9a2711eec14,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.6:8443,kubernetes.io/config.hash: 9653fce8177804cb18eaf9a2711eec14,kubernetes.io/config.seen: 2023-09-19T17:52:44.176786370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=9a971d6f-8a43-47e3-98eb-cb79173a5512 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.974068478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=780690a7-5f0a-42db-8178-771cfedc164e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.974121058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=780690a7-5f0a-42db-8178-771cfedc164e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.974442938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639,PodSandboxId:8cf13e6e546db1173a098b29385f41d69dced0ff09c55ff435cc067f34848e08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145990203371481,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e9eb53-dd92-4b84-a787-82bea5449cd2,},Annotations:map[string]string{io.kubernetes.container.hash: 23c0ca2d,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3,PodSandboxId:f17fe50a94dc6c5dac44471ef6b34fe943ebe3a6ca715ed0d622c720ab4a7805,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145989875634329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b75j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7be05aae-86ca-4640-a0f3-6518e7896711,},Annotations:map[string]string{io.kubernetes.container.hash: db3551f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac,PodSandboxId:d2c40bdab0a72b127710c5186de27fc536d17646985ab09c498c946895cb66ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145988936472478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2dbbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93175ebd-b717-4c98-a56b-aca1404ac8bd,},Annotations:map[string]string{io.kubernetes.container.hash: 64c86263,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd,PodSandboxId:81c3c3b08917e103bc2df6b1a6317fd66a289c93d08a4dc203866f8fbba50df4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145965935976289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 1670db7e0aab166d0e691d553d87d094,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6,PodSandboxId:35138525cd8bdd437c49edb71ced54a9900ef8a2fad17a50cfc47e88a467c919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145965387167371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3620b65e9d6874920789e5c75788a548,},Annotations:
map[string]string{io.kubernetes.container.hash: c42e2ce8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0,PodSandboxId:2458503066ab0f24bc9e32e728dd83b7667e465762c4b17d8dfacf8726161b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145965406467349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45da9a46e3aeecd9e79ac
27f306ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed,PodSandboxId:f665a7ec3953a0cfc044e610ae021c158ed781dae95541093bfc767156d0c930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145965325217615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9653fce8177804cb18eaf9a2711eec1
4,},Annotations:map[string]string{io.kubernetes.container.hash: d6b02d78,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=780690a7-5f0a-42db-8178-771cfedc164e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.993887916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8f0ff96f-2e05-4e6a-aa04-77a41351781c name=/runtime.v1.RuntimeService/Version
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.993961866Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8f0ff96f-2e05-4e6a-aa04-77a41351781c name=/runtime.v1.RuntimeService/Version
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.995678213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=022f2f33-f4c9-46bc-8dd0-51dd420c4652 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.996055353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146533996043876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=022f2f33-f4c9-46bc-8dd0-51dd420c4652 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.996742433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=35235146-feab-4e05-ad70-d808ecaca389 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.996837562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=35235146-feab-4e05-ad70-d808ecaca389 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:13 embed-certs-415155 crio[726]: time="2023-09-19 18:02:13.997060635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639,PodSandboxId:8cf13e6e546db1173a098b29385f41d69dced0ff09c55ff435cc067f34848e08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145990203371481,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e9eb53-dd92-4b84-a787-82bea5449cd2,},Annotations:map[string]string{io.kubernetes.container.hash: 23c0ca2d,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3,PodSandboxId:f17fe50a94dc6c5dac44471ef6b34fe943ebe3a6ca715ed0d622c720ab4a7805,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145989875634329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b75j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7be05aae-86ca-4640-a0f3-6518e7896711,},Annotations:map[string]string{io.kubernetes.container.hash: db3551f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac,PodSandboxId:d2c40bdab0a72b127710c5186de27fc536d17646985ab09c498c946895cb66ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145988936472478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2dbbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93175ebd-b717-4c98-a56b-aca1404ac8bd,},Annotations:map[string]string{io.kubernetes.container.hash: 64c86263,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd,PodSandboxId:81c3c3b08917e103bc2df6b1a6317fd66a289c93d08a4dc203866f8fbba50df4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145965935976289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 1670db7e0aab166d0e691d553d87d094,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6,PodSandboxId:35138525cd8bdd437c49edb71ced54a9900ef8a2fad17a50cfc47e88a467c919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145965387167371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3620b65e9d6874920789e5c75788a548,},Annotations:
map[string]string{io.kubernetes.container.hash: c42e2ce8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0,PodSandboxId:2458503066ab0f24bc9e32e728dd83b7667e465762c4b17d8dfacf8726161b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145965406467349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45da9a46e3aeecd9e79ac
27f306ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed,PodSandboxId:f665a7ec3953a0cfc044e610ae021c158ed781dae95541093bfc767156d0c930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145965325217615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9653fce8177804cb18eaf9a2711eec1
4,},Annotations:map[string]string{io.kubernetes.container.hash: d6b02d78,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=35235146-feab-4e05-ad70-d808ecaca389 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:14 embed-certs-415155 crio[726]: time="2023-09-19 18:02:14.039213533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=385ea444-6aa7-402f-95ba-a90eb688e543 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:02:14 embed-certs-415155 crio[726]: time="2023-09-19 18:02:14.039356908Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=385ea444-6aa7-402f-95ba-a90eb688e543 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:02:14 embed-certs-415155 crio[726]: time="2023-09-19 18:02:14.040485044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e7ade465-38de-4e06-a50f-367d464e4010 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:02:14 embed-certs-415155 crio[726]: time="2023-09-19 18:02:14.040872441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146534040859421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e7ade465-38de-4e06-a50f-367d464e4010 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:02:14 embed-certs-415155 crio[726]: time="2023-09-19 18:02:14.041715673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=43b7da24-4375-485a-b3ff-19b68fb10341 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:14 embed-certs-415155 crio[726]: time="2023-09-19 18:02:14.041759169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=43b7da24-4375-485a-b3ff-19b68fb10341 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:02:14 embed-certs-415155 crio[726]: time="2023-09-19 18:02:14.041923669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639,PodSandboxId:8cf13e6e546db1173a098b29385f41d69dced0ff09c55ff435cc067f34848e08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145990203371481,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e9eb53-dd92-4b84-a787-82bea5449cd2,},Annotations:map[string]string{io.kubernetes.container.hash: 23c0ca2d,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3,PodSandboxId:f17fe50a94dc6c5dac44471ef6b34fe943ebe3a6ca715ed0d622c720ab4a7805,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145989875634329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b75j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7be05aae-86ca-4640-a0f3-6518e7896711,},Annotations:map[string]string{io.kubernetes.container.hash: db3551f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac,PodSandboxId:d2c40bdab0a72b127710c5186de27fc536d17646985ab09c498c946895cb66ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145988936472478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2dbbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93175ebd-b717-4c98-a56b-aca1404ac8bd,},Annotations:map[string]string{io.kubernetes.container.hash: 64c86263,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd,PodSandboxId:81c3c3b08917e103bc2df6b1a6317fd66a289c93d08a4dc203866f8fbba50df4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145965935976289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 1670db7e0aab166d0e691d553d87d094,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6,PodSandboxId:35138525cd8bdd437c49edb71ced54a9900ef8a2fad17a50cfc47e88a467c919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145965387167371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3620b65e9d6874920789e5c75788a548,},Annotations:
map[string]string{io.kubernetes.container.hash: c42e2ce8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0,PodSandboxId:2458503066ab0f24bc9e32e728dd83b7667e465762c4b17d8dfacf8726161b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145965406467349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45da9a46e3aeecd9e79ac
27f306ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed,PodSandboxId:f665a7ec3953a0cfc044e610ae021c158ed781dae95541093bfc767156d0c930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145965325217615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9653fce8177804cb18eaf9a2711eec1
4,},Annotations:map[string]string{io.kubernetes.container.hash: d6b02d78,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=43b7da24-4375-485a-b3ff-19b68fb10341 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	261e128aa6ed6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   8cf13e6e546db       storage-provisioner
	2001e36377828       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   9 minutes ago       Running             kube-proxy                0                   f17fe50a94dc6       kube-proxy-b75j2
	25abbbb219d99       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   d2c40bdab0a72       coredns-5dd5756b68-2dbbk
	8a61c6dfc47ea       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   9 minutes ago       Running             kube-scheduler            2                   81c3c3b08917e       kube-scheduler-embed-certs-415155
	6bb0d00ed49b6       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   9 minutes ago       Running             kube-controller-manager   2                   2458503066ab0       kube-controller-manager-embed-certs-415155
	c5a6bd76fad6f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   35138525cd8bd       etcd-embed-certs-415155
	2af4bd79127b0       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   9 minutes ago       Running             kube-apiserver            2                   f665a7ec3953a       kube-apiserver-embed-certs-415155
	
	* 
	* ==> coredns [25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:47583 - 18749 "HINFO IN 2139454996083767068.9066230539177473847. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022823994s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-415155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-415155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=embed-certs-415155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_52_52_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:52:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-415155
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 18:02:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:58:19 +0000   Tue, 19 Sep 2023 17:52:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:58:19 +0000   Tue, 19 Sep 2023 17:52:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:58:19 +0000   Tue, 19 Sep 2023 17:52:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:58:19 +0000   Tue, 19 Sep 2023 17:53:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.6
	  Hostname:    embed-certs-415155
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 62d0f53d77e049afa3581ea6927d1068
	  System UUID:                62d0f53d-77e0-49af-a358-1ea6927d1068
	  Boot ID:                    43753850-fc8d-4bdd-a3d9-720d7f34ce86
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-2dbbk                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-embed-certs-415155                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-415155              250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-415155    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-b75j2                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-415155              100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-kdxsz                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m30s (x8 over 9m30s)  kubelet          Node embed-certs-415155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s (x8 over 9m30s)  kubelet          Node embed-certs-415155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s (x7 over 9m30s)  kubelet          Node embed-certs-415155 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node embed-certs-415155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node embed-certs-415155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node embed-certs-415155 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node embed-certs-415155 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m11s                  kubelet          Node embed-certs-415155 status is now: NodeReady
	  Normal  RegisteredNode           9m9s                   node-controller  Node embed-certs-415155 event: Registered Node embed-certs-415155 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073164] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.488255] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.452779] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149057] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.509844] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.298094] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.159057] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.165127] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.117036] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.273733] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[Sep19 17:48] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +20.060788] kauditd_printk_skb: 29 callbacks suppressed
	[Sep19 17:52] systemd-fstab-generator[3544]: Ignoring "noauto" for root device
	[  +8.762331] systemd-fstab-generator[3873]: Ignoring "noauto" for root device
	[Sep19 17:53] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.535789] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6] <==
	* {"level":"info","ts":"2023-09-19T17:52:46.906791Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-19T17:52:46.906915Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.6:2380"}
	{"level":"info","ts":"2023-09-19T17:52:47.545343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2b2e43cf24bcd38c is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-19T17:52:47.545465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2b2e43cf24bcd38c became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-19T17:52:47.545519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2b2e43cf24bcd38c received MsgPreVoteResp from 2b2e43cf24bcd38c at term 1"}
	{"level":"info","ts":"2023-09-19T17:52:47.545569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2b2e43cf24bcd38c became candidate at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:47.545604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2b2e43cf24bcd38c received MsgVoteResp from 2b2e43cf24bcd38c at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:47.545632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2b2e43cf24bcd38c became leader at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:47.545657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2b2e43cf24bcd38c elected leader 2b2e43cf24bcd38c at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:47.552409Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:47.556567Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2b2e43cf24bcd38c","local-member-attributes":"{Name:embed-certs-415155 ClientURLs:[https://192.168.50.6:2379]}","request-path":"/0/members/2b2e43cf24bcd38c/attributes","cluster-id":"2873be035e30d2ee","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:52:47.556833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:52:47.560274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:52:47.560346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:52:47.562345Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2873be035e30d2ee","local-member-id":"2b2e43cf24bcd38c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:47.562539Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:47.562586Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:47.562618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:52:47.563564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.6:2379"}
	{"level":"info","ts":"2023-09-19T17:52:47.575847Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2023-09-19T17:54:35.321844Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.699941ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15243711321133089527 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:561 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-19T17:54:35.322357Z","caller":"traceutil/trace.go:171","msg":"trace[1501878281] transaction","detail":"{read_only:false; response_revision:563; number_of_response:1; }","duration":"376.017859ms","start":"2023-09-19T17:54:34.946132Z","end":"2023-09-19T17:54:35.322149Z","steps":["trace[1501878281] 'process raft request'  (duration: 119.844165ms)","trace[1501878281] 'compare'  (duration: 254.574507ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-19T17:54:35.322442Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T17:54:34.946114Z","time spent":"376.286469ms","remote":"127.0.0.1:54616","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:561 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-09-19T17:54:37.263749Z","caller":"traceutil/trace.go:171","msg":"trace[104613979] transaction","detail":"{read_only:false; response_revision:564; number_of_response:1; }","duration":"360.71259ms","start":"2023-09-19T17:54:36.903012Z","end":"2023-09-19T17:54:37.263725Z","steps":["trace[104613979] 'process raft request'  (duration: 360.496437ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T17:54:37.264086Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T17:54:36.902992Z","time spent":"360.942225ms","remote":"127.0.0.1:54594","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":804,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kdxsz.17865e4bfa56de80\" mod_revision:513 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kdxsz.17865e4bfa56de80\" value_size:709 lease:6020339284278313530 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kdxsz.17865e4bfa56de80\" > >"}
	
	* 
	* ==> kernel <==
	*  18:02:14 up 14 min,  0 users,  load average: 0.12, 0.19, 0.17
	Linux embed-certs-415155 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed] <==
	* W0919 17:57:50.218385       1 handler_proxy.go:93] no RequestInfo found in the context
	W0919 17:57:50.218403       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:57:50.218550       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 17:57:50.218672       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 17:57:50.218756       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 17:57:50.220619       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 17:58:49.107481       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 17:58:50.219054       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:58:50.219120       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 17:58:50.219136       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 17:58:50.220804       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 17:58:50.220911       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 17:58:50.220939       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 17:59:49.107412       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:00:49.107298       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:00:50.219752       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:00:50.219948       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:00:50.220038       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 18:00:50.221972       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:00:50.222045       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:00:50.222053       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:01:49.106963       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0] <==
	* I0919 17:56:35.759913       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:57:05.305048       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:57:05.767803       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:57:35.311422       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:57:35.779823       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:58:05.317841       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:58:05.789499       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:58:35.324148       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:58:35.799179       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 17:59:05.330132       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:59:05.809532       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 17:59:09.911733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="255.725µs"
	I0919 17:59:22.914026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="101.398µs"
	E0919 17:59:35.335411       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 17:59:35.818450       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:00:05.342960       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:00:05.830196       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:00:35.349524       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:00:35.839473       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:01:05.354795       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:01:05.848907       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:01:35.360887       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:01:35.857453       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:02:05.366160       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:02:05.865072       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3] <==
	* I0919 17:53:10.309577       1 server_others.go:69] "Using iptables proxy"
	I0919 17:53:10.330935       1 node.go:141] Successfully retrieved node IP: 192.168.50.6
	I0919 17:53:10.427565       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 17:53:10.432374       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 17:53:10.439685       1 server_others.go:152] "Using iptables Proxier"
	I0919 17:53:10.439759       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 17:53:10.439928       1 server.go:846] "Version info" version="v1.28.2"
	I0919 17:53:10.439965       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:53:10.441022       1 config.go:188] "Starting service config controller"
	I0919 17:53:10.441093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 17:53:10.441119       1 config.go:97] "Starting endpoint slice config controller"
	I0919 17:53:10.441123       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 17:53:10.446868       1 config.go:315] "Starting node config controller"
	I0919 17:53:10.446995       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 17:53:10.541497       1 shared_informer.go:318] Caches are synced for service config
	I0919 17:53:10.541615       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 17:53:10.547430       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd] <==
	* W0919 17:52:49.302528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:49.302535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:49.302577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 17:52:49.302585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 17:52:49.302626       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:52:49.302636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 17:52:49.302688       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:49.302697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:50.128885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:50.128942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:50.160617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 17:52:50.160766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 17:52:50.172754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 17:52:50.172855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0919 17:52:50.181368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 17:52:50.181466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 17:52:50.231490       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:50.231801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:50.398956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:50.399071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:50.445292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 17:52:50.445358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0919 17:52:50.630428       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 17:52:50.630483       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0919 17:52:52.475103       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:47:38 UTC, ends at Tue 2023-09-19 18:02:14 UTC. --
	Sep 19 17:59:35 embed-certs-415155 kubelet[3880]: E0919 17:59:35.894880    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 17:59:48 embed-certs-415155 kubelet[3880]: E0919 17:59:48.896006    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 17:59:53 embed-certs-415155 kubelet[3880]: E0919 17:59:53.028428    3880 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 17:59:53 embed-certs-415155 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 17:59:53 embed-certs-415155 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 17:59:53 embed-certs-415155 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 17:59:59 embed-certs-415155 kubelet[3880]: E0919 17:59:59.895031    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:00:11 embed-certs-415155 kubelet[3880]: E0919 18:00:11.895499    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:00:24 embed-certs-415155 kubelet[3880]: E0919 18:00:24.896206    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:00:35 embed-certs-415155 kubelet[3880]: E0919 18:00:35.895732    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:00:47 embed-certs-415155 kubelet[3880]: E0919 18:00:47.895904    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:00:53 embed-certs-415155 kubelet[3880]: E0919 18:00:53.036045    3880 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:00:53 embed-certs-415155 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:00:53 embed-certs-415155 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:00:53 embed-certs-415155 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:01:01 embed-certs-415155 kubelet[3880]: E0919 18:01:01.895665    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:01:16 embed-certs-415155 kubelet[3880]: E0919 18:01:16.895494    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:01:29 embed-certs-415155 kubelet[3880]: E0919 18:01:29.895723    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:01:42 embed-certs-415155 kubelet[3880]: E0919 18:01:42.896537    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:01:53 embed-certs-415155 kubelet[3880]: E0919 18:01:53.028984    3880 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:01:53 embed-certs-415155 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:01:53 embed-certs-415155 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:01:53 embed-certs-415155 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:01:57 embed-certs-415155 kubelet[3880]: E0919 18:01:57.895435    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:02:12 embed-certs-415155 kubelet[3880]: E0919 18:02:12.896703    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	
	* 
	* ==> storage-provisioner [261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639] <==
	* I0919 17:53:10.375641       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 17:53:10.391596       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 17:53:10.391722       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 17:53:10.411937       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 17:53:10.412509       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"95579dec-4f0b-4dec-9dc5-d80595c653f2", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-415155_63a03324-0e0c-4655-92bd-8d111ef4375e became leader
	I0919 17:53:10.412571       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-415155_63a03324-0e0c-4655-92bd-8d111ef4375e!
	I0919 17:53:10.513650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-415155_63a03324-0e0c-4655-92bd-8d111ef4375e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-415155 -n embed-certs-415155
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-415155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kdxsz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-415155 describe pod metrics-server-57f55c9bc5-kdxsz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-415155 describe pod metrics-server-57f55c9bc5-kdxsz: exit status 1 (66.770216ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kdxsz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-415155 describe pod metrics-server-57f55c9bc5-kdxsz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.44s)
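The non-running pod flagged above, metrics-server-57f55c9bc5-kdxsz, is the same pod the kubelet journal shows stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4, and the kubelet reports it in the kube-system namespace. A rough manual equivalent of the post-mortem query, assuming the embed-certs-415155 context is still reachable (illustrative commands only, not part of the test harness), would be:

	kubectl --context embed-certs-415155 get pods -A --field-selector=status.phase!=Running
	kubectl --context embed-certs-415155 describe pod metrics-server-57f55c9bc5-kdxsz -n kube-system

Adding -n kube-system scopes the describe to the namespace the kubelet reports for the pod, which is consistent with the NotFound error above, where the describe ran without a namespace flag.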

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (522.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-19 18:10:03.64702131 +0000 UTC m=+5736.579005348
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-415555 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-415555 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.121µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-415555 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-415555 logs -n 25
E0919 18:10:04.018868   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-415555 logs -n 25: (1.309656887s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo cat                           | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo cat                           | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo cat                           | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo docker                        | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo cat                           | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo cat                           | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo cat                           | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo cat                           | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo                               | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo find                          | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-648984 sudo crio                          | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-648984                                    | kindnet-648984            | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC | 19 Sep 23 18:09 UTC |
	| start   | -p enable-default-cni-648984                         | enable-default-cni-648984 | jenkins | v1.31.2 | 19 Sep 23 18:09 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 18:09:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:09:54.208929   56335 out.go:296] Setting OutFile to fd 1 ...
	I0919 18:09:54.209059   56335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 18:09:54.209069   56335 out.go:309] Setting ErrFile to fd 2...
	I0919 18:09:54.209076   56335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 18:09:54.209342   56335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 18:09:54.210087   56335 out.go:303] Setting JSON to false
	I0919 18:09:54.211483   56335 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6744,"bootTime":1695140250,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:09:54.211564   56335 start.go:138] virtualization: kvm guest
	I0919 18:09:54.214518   56335 out.go:177] * [enable-default-cni-648984] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:09:54.216617   56335 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 18:09:54.216659   56335 notify.go:220] Checking for updates...
	I0919 18:09:54.218293   56335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:09:54.221173   56335 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 18:09:54.223796   56335 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 18:09:54.225369   56335 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:09:54.227605   56335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:09:54.229928   56335 config.go:182] Loaded profile config "calico-648984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:09:54.230031   56335 config.go:182] Loaded profile config "custom-flannel-648984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:09:54.230110   56335 config.go:182] Loaded profile config "default-k8s-diff-port-415555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:09:54.230191   56335 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 18:09:54.269472   56335 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 18:09:54.271009   56335 start.go:298] selected driver: kvm2
	I0919 18:09:54.271026   56335 start.go:902] validating driver "kvm2" against <nil>
	I0919 18:09:54.271040   56335 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:09:54.272012   56335 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:09:54.272117   56335 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 18:09:54.287273   56335 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 18:09:54.287324   56335 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	E0919 18:09:54.287574   56335 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0919 18:09:54.287605   56335 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:09:54.287643   56335 cni.go:84] Creating CNI manager for "bridge"
	I0919 18:09:54.287654   56335 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:09:54.287666   56335 start_flags.go:321] config:
	{Name:enable-default-cni-648984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-648984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 18:09:54.287844   56335 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:09:54.290435   56335 out.go:177] * Starting control plane node enable-default-cni-648984 in cluster enable-default-cni-648984
	I0919 18:09:52.893219   54188 out.go:204]   - Generating certificates and keys ...
	I0919 18:09:52.893343   54188 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 18:09:52.893417   54188 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 18:09:53.150000   54188 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:09:53.295058   54188 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:09:53.473211   54188 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:09:53.614534   54188 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 18:09:53.709052   54188 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 18:09:53.709513   54188 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [calico-648984 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I0919 18:09:53.798801   54188 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 18:09:53.799019   54188 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [calico-648984 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I0919 18:09:53.993683   54188 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:09:54.258186   54188 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:09:54.403809   54188 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 18:09:54.403933   54188 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:09:54.548967   54188 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:09:54.670989   54188 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:09:55.151265   54188 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:09:55.462647   54188 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:09:55.463559   54188 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:09:55.466733   54188 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:09:52.193607   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | domain custom-flannel-648984 has defined MAC address 52:54:00:ca:5c:4b in network mk-custom-flannel-648984
	I0919 18:09:52.194127   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | unable to find current IP address of domain custom-flannel-648984 in network mk-custom-flannel-648984
	I0919 18:09:52.194159   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | I0919 18:09:52.194062   55157 retry.go:31] will retry after 2.2123474s: waiting for machine to come up
	I0919 18:09:54.407919   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | domain custom-flannel-648984 has defined MAC address 52:54:00:ca:5c:4b in network mk-custom-flannel-648984
	I0919 18:09:54.408576   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | unable to find current IP address of domain custom-flannel-648984 in network mk-custom-flannel-648984
	I0919 18:09:54.408605   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | I0919 18:09:54.408520   55157 retry.go:31] will retry after 2.398506552s: waiting for machine to come up
	I0919 18:09:54.292462   56335 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 18:09:54.292546   56335 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 18:09:54.292557   56335 cache.go:57] Caching tarball of preloaded images
	I0919 18:09:54.292634   56335 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 18:09:54.292648   56335 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 18:09:54.292767   56335 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/enable-default-cni-648984/config.json ...
	I0919 18:09:54.292788   56335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/enable-default-cni-648984/config.json: {Name:mk22e831bff340f10967ed19a2e2e5a03f4d1216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:09:54.292928   56335 start.go:365] acquiring machines lock for enable-default-cni-648984: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 18:09:55.468595   54188 out.go:204]   - Booting up control plane ...
	I0919 18:09:55.468779   54188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:09:55.468895   54188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:09:55.469919   54188 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:09:55.494332   54188 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:09:55.495623   54188 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:09:55.495717   54188 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 18:09:55.643122   54188 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 18:09:56.810224   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | domain custom-flannel-648984 has defined MAC address 52:54:00:ca:5c:4b in network mk-custom-flannel-648984
	I0919 18:09:56.810894   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | unable to find current IP address of domain custom-flannel-648984 in network mk-custom-flannel-648984
	I0919 18:09:56.810926   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | I0919 18:09:56.810852   55157 retry.go:31] will retry after 3.168263357s: waiting for machine to come up
	I0919 18:09:59.981305   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | domain custom-flannel-648984 has defined MAC address 52:54:00:ca:5c:4b in network mk-custom-flannel-648984
	I0919 18:09:59.981826   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | unable to find current IP address of domain custom-flannel-648984 in network mk-custom-flannel-648984
	I0919 18:09:59.981857   54828 main.go:141] libmachine: (custom-flannel-648984) DBG | I0919 18:09:59.981775   55157 retry.go:31] will retry after 2.876633652s: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:47:19 UTC, ends at Tue 2023-09-19 18:10:04 UTC. --
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.386110389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3cbf4596-9a61-4aa6-b011-d97acbad9649 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.388101715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ea6432b8-9655-40ce-80ed-b4dd5ca038b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.388449762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695147004388438307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ea6432b8-9655-40ce-80ed-b4dd5ca038b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.389173049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cd3d9ef6-1d29-4b75-b36d-6c3d610e3dbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.389250825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cd3d9ef6-1d29-4b75-b36d-6c3d610e3dbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.389446145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145704710627365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c82e4737fd63bdaff2180553ad757d0219a36e7e1da37e47bedfba9ba440687,PodSandboxId:fb8b0fd71c3c434c806072d67372354cd9f6627a3ba236d578478b6a4e1397e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1695145683935078834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fe5ecac-dc28-400f-9832-186d228038a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5dc3ec37,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82,PodSandboxId:52fd365d383f42f849b04df9996a7b136b23765cf1056461101a3fb4dfa19795,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145681213820666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ff79be-8293-4d55-b285-4c6d1d64adf0,},Annotations:map[string]string{io.kubernetes.container.hash: 29cbce72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201,PodSandboxId:4f015308d1a02d70eb3fadfbefc08c459a60cc8c72e933c00e3692d581325d86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145673689304961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cghw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2f9f0da5-e5e6-40e4-bb49-16749597ac07,},Annotations:map[string]string{io.kubernetes.container.hash: 9a44f644,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695145673420437976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c,PodSandboxId:85199eeddffc5c92dcb451c8e4804aa31bf4c67116ee5a5bc280a7c0359713d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145667299630176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 3a1055bc1abe15d70dabf15bc60452bc,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6,PodSandboxId:5db4458e45c836aa760cb69f8d33aab729f84b6b060fc1f7faf841f161c3909b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145666950319008,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-415555,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 67f8ba03cffad6e4d002be3dbed01bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2,PodSandboxId:94cada6b81dbb904fca2eec16c21cef4160abe1a72aa173ba4157044d97fda63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145666733961516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
8de7ac8f150b3cf13057f5fdf78f67b,},Annotations:map[string]string{io.kubernetes.container.hash: d53c868,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b,PodSandboxId:282bac4b1298549caf1cd5fa998b2a5427c0521b887b7061843c4a96ee5cbf38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145666570358234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
67fda935b445f8213befc2eed16db1,},Annotations:map[string]string{io.kubernetes.container.hash: a65da78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cd3d9ef6-1d29-4b75-b36d-6c3d610e3dbe name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.438864418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fcdfbef5-0430-4496-9a10-bed8ae2232eb name=/runtime.v1.RuntimeService/Version
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.439063493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fcdfbef5-0430-4496-9a10-bed8ae2232eb name=/runtime.v1.RuntimeService/Version
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.440388589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e63a9884-887f-4212-9c11-9e6f514dadc4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.440797440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695147004440724602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e63a9884-887f-4212-9c11-9e6f514dadc4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.442089474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d36adecc-a459-4e3a-bd5c-96e1e490579b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.442139914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d36adecc-a459-4e3a-bd5c-96e1e490579b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.442334193Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145704710627365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c82e4737fd63bdaff2180553ad757d0219a36e7e1da37e47bedfba9ba440687,PodSandboxId:fb8b0fd71c3c434c806072d67372354cd9f6627a3ba236d578478b6a4e1397e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1695145683935078834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fe5ecac-dc28-400f-9832-186d228038a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5dc3ec37,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82,PodSandboxId:52fd365d383f42f849b04df9996a7b136b23765cf1056461101a3fb4dfa19795,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145681213820666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ff79be-8293-4d55-b285-4c6d1d64adf0,},Annotations:map[string]string{io.kubernetes.container.hash: 29cbce72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201,PodSandboxId:4f015308d1a02d70eb3fadfbefc08c459a60cc8c72e933c00e3692d581325d86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145673689304961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cghw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2f9f0da5-e5e6-40e4-bb49-16749597ac07,},Annotations:map[string]string{io.kubernetes.container.hash: 9a44f644,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695145673420437976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c,PodSandboxId:85199eeddffc5c92dcb451c8e4804aa31bf4c67116ee5a5bc280a7c0359713d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145667299630176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 3a1055bc1abe15d70dabf15bc60452bc,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6,PodSandboxId:5db4458e45c836aa760cb69f8d33aab729f84b6b060fc1f7faf841f161c3909b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145666950319008,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-415555,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 67f8ba03cffad6e4d002be3dbed01bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2,PodSandboxId:94cada6b81dbb904fca2eec16c21cef4160abe1a72aa173ba4157044d97fda63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145666733961516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
8de7ac8f150b3cf13057f5fdf78f67b,},Annotations:map[string]string{io.kubernetes.container.hash: d53c868,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b,PodSandboxId:282bac4b1298549caf1cd5fa998b2a5427c0521b887b7061843c4a96ee5cbf38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145666570358234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
67fda935b445f8213befc2eed16db1,},Annotations:map[string]string{io.kubernetes.container.hash: a65da78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d36adecc-a459-4e3a-bd5c-96e1e490579b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.481544080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=76e91fe8-aeb9-418f-9f3d-84962c84a3af name=/runtime.v1.RuntimeService/Version
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.481598863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=76e91fe8-aeb9-418f-9f3d-84962c84a3af name=/runtime.v1.RuntimeService/Version
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.483289979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4021ab3c-1618-4b45-8717-bfaf34b4d1fa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.483693351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695147004483680390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4021ab3c-1618-4b45-8717-bfaf34b4d1fa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.484447395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=17cb3ff4-8296-4627-8820-21289de38326 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.484497763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=17cb3ff4-8296-4627-8820-21289de38326 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.484705764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145704710627365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c82e4737fd63bdaff2180553ad757d0219a36e7e1da37e47bedfba9ba440687,PodSandboxId:fb8b0fd71c3c434c806072d67372354cd9f6627a3ba236d578478b6a4e1397e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1695145683935078834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fe5ecac-dc28-400f-9832-186d228038a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5dc3ec37,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82,PodSandboxId:52fd365d383f42f849b04df9996a7b136b23765cf1056461101a3fb4dfa19795,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145681213820666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ff79be-8293-4d55-b285-4c6d1d64adf0,},Annotations:map[string]string{io.kubernetes.container.hash: 29cbce72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201,PodSandboxId:4f015308d1a02d70eb3fadfbefc08c459a60cc8c72e933c00e3692d581325d86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145673689304961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cghw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2f9f0da5-e5e6-40e4-bb49-16749597ac07,},Annotations:map[string]string{io.kubernetes.container.hash: 9a44f644,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695145673420437976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c,PodSandboxId:85199eeddffc5c92dcb451c8e4804aa31bf4c67116ee5a5bc280a7c0359713d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145667299630176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 3a1055bc1abe15d70dabf15bc60452bc,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6,PodSandboxId:5db4458e45c836aa760cb69f8d33aab729f84b6b060fc1f7faf841f161c3909b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145666950319008,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-415555,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 67f8ba03cffad6e4d002be3dbed01bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2,PodSandboxId:94cada6b81dbb904fca2eec16c21cef4160abe1a72aa173ba4157044d97fda63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145666733961516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
8de7ac8f150b3cf13057f5fdf78f67b,},Annotations:map[string]string{io.kubernetes.container.hash: d53c868,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b,PodSandboxId:282bac4b1298549caf1cd5fa998b2a5427c0521b887b7061843c4a96ee5cbf38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145666570358234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
67fda935b445f8213befc2eed16db1,},Annotations:map[string]string{io.kubernetes.container.hash: a65da78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=17cb3ff4-8296-4627-8820-21289de38326 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.491461098Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=daa5723c-1673-4857-b1f2-c364992d4afe name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.491737150Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:52fd365d383f42f849b04df9996a7b136b23765cf1056461101a3fb4dfa19795,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-6fxz5,Uid:75ff79be-8293-4d55-b285-4c6d1d64adf0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145680430619994,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-6fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ff79be-8293-4d55-b285-4c6d1d64adf0,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T17:47:52.452099391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fb8b0fd71c3c434c806072d67372354cd9f6627a3ba236d578478b6a4e1397e8,Metadata:&PodSandboxMetadata{Name:busybox,Uid:1fe5ecac-dc28-400f-9832-186d228038a1,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1695145680427200969,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fe5ecac-dc28-400f-9832-186d228038a1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T17:47:52.452098321Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f7938be2997d21e81db7e276ad486b138b1e96367d29e1f366e1c75bf8eb5dda,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-vq4p7,Uid:be4949af-dd94-45ea-bb7d-2fe124ecd2a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145677516447988,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-vq4p7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be4949af-dd94-45ea-bb7d-2fe124ecd2a5,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19
T17:47:52.452096113Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f015308d1a02d70eb3fadfbefc08c459a60cc8c72e933c00e3692d581325d86,Metadata:&PodSandboxMetadata{Name:kube-proxy-5cghw,Uid:2f9f0da5-e5e6-40e4-bb49-16749597ac07,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145672804731224,Labels:map[string]string{controller-revision-hash: 5cbdb8dcbd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5cghw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f9f0da5-e5e6-40e4-bb49-16749597ac07,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-19T17:47:52.452093677Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d11f41ac-f846-46de-a517-dab454f05033,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145672782534606,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-09-19T17:47:52.452097291Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:282bac4b1298549caf1cd5fa998b2a5427c0521b887b7061843c4a96ee5cbf38,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-415555,Uid:2767fda935b445f8213befc2eed16db1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145665985941354,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2767fda935b445f8213befc2eed16db1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.228:8444,kubernetes.io/config.hash: 2767fda935b445f8213befc2eed16db1,kubernetes.io/config.seen: 2023-09-19T17:47:45.435887938Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5db4458e45c836aa760cb69f8d33aab729f84b6b060fc1f7faf841f161c390
9b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-415555,Uid:67f8ba03cffad6e4d002be3dbed01bbd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145665981042395,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67f8ba03cffad6e4d002be3dbed01bbd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 67f8ba03cffad6e4d002be3dbed01bbd,kubernetes.io/config.seen: 2023-09-19T17:47:45.435888828Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:85199eeddffc5c92dcb451c8e4804aa31bf4c67116ee5a5bc280a7c0359713d7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-415555,Uid:3a1055bc1abe15d70dabf15bc60452bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145665963070350,Labels:map[string]string{component: kube-s
cheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a1055bc1abe15d70dabf15bc60452bc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3a1055bc1abe15d70dabf15bc60452bc,kubernetes.io/config.seen: 2023-09-19T17:47:45.435883042Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94cada6b81dbb904fca2eec16c21cef4160abe1a72aa173ba4157044d97fda63,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-415555,Uid:68de7ac8f150b3cf13057f5fdf78f67b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1695145665919343255,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68de7ac8f150b3cf13057f5fdf78f67b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clie
nt-urls: https://192.168.61.228:2379,kubernetes.io/config.hash: 68de7ac8f150b3cf13057f5fdf78f67b,kubernetes.io/config.seen: 2023-09-19T17:47:45.435886926Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=daa5723c-1673-4857-b1f2-c364992d4afe name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.492584694Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=987bef5c-8216-499e-8848-f48a6446261d name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.492630634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=987bef5c-8216-499e-8848-f48a6446261d name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:10:04 default-k8s-diff-port-415555 crio[725]: time="2023-09-19 18:10:04.492800564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145704710627365,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c82e4737fd63bdaff2180553ad757d0219a36e7e1da37e47bedfba9ba440687,PodSandboxId:fb8b0fd71c3c434c806072d67372354cd9f6627a3ba236d578478b6a4e1397e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1695145683935078834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1fe5ecac-dc28-400f-9832-186d228038a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5dc3ec37,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82,PodSandboxId:52fd365d383f42f849b04df9996a7b136b23765cf1056461101a3fb4dfa19795,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145681213820666,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6fxz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ff79be-8293-4d55-b285-4c6d1d64adf0,},Annotations:map[string]string{io.kubernetes.container.hash: 29cbce72,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201,PodSandboxId:4f015308d1a02d70eb3fadfbefc08c459a60cc8c72e933c00e3692d581325d86,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145673689304961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cghw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
2f9f0da5-e5e6-40e4-bb49-16749597ac07,},Annotations:map[string]string{io.kubernetes.container.hash: 9a44f644,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6,PodSandboxId:df3f3e0c9250606ac01dc8d2a0eb1a335efb2ae3c6f46365443c489dba5ccc46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1695145673420437976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
11f41ac-f846-46de-a517-dab454f05033,},Annotations:map[string]string{io.kubernetes.container.hash: 4e5fdf5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c,PodSandboxId:85199eeddffc5c92dcb451c8e4804aa31bf4c67116ee5a5bc280a7c0359713d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145667299630176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 3a1055bc1abe15d70dabf15bc60452bc,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6,PodSandboxId:5db4458e45c836aa760cb69f8d33aab729f84b6b060fc1f7faf841f161c3909b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145666950319008,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-415555,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 67f8ba03cffad6e4d002be3dbed01bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2,PodSandboxId:94cada6b81dbb904fca2eec16c21cef4160abe1a72aa173ba4157044d97fda63,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145666733961516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
8de7ac8f150b3cf13057f5fdf78f67b,},Annotations:map[string]string{io.kubernetes.container.hash: d53c868,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b,PodSandboxId:282bac4b1298549caf1cd5fa998b2a5427c0521b887b7061843c4a96ee5cbf38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145666570358234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-415555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
67fda935b445f8213befc2eed16db1,},Annotations:map[string]string{io.kubernetes.container.hash: a65da78a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=987bef5c-8216-499e-8848-f48a6446261d name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c7e1ede777c67       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   df3f3e0c92506       storage-provisioner
	5c82e4737fd63       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   fb8b0fd71c3c4       busybox
	6165f78e9f3be       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      22 minutes ago      Running             coredns                   1                   52fd365d383f4       coredns-5dd5756b68-6fxz5
	52ed624ea25f5       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      22 minutes ago      Running             kube-proxy                1                   4f015308d1a02       kube-proxy-5cghw
	9055f7f0e2b85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   df3f3e0c92506       storage-provisioner
	23740abdea376       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      22 minutes ago      Running             kube-scheduler            1                   85199eeddffc5       kube-scheduler-default-k8s-diff-port-415555
	3ead0fadb5c30       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      22 minutes ago      Running             kube-controller-manager   1                   5db4458e45c83       kube-controller-manager-default-k8s-diff-port-415555
	837d6df2a022c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      22 minutes ago      Running             etcd                      1                   94cada6b81dbb       etcd-default-k8s-diff-port-415555
	54b31f09971f1       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      22 minutes ago      Running             kube-apiserver            1                   282bac4b12985       kube-apiserver-default-k8s-diff-port-415555
	
	* 
	* ==> coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58811 - 63633 "HINFO IN 6878534768593487844.6714548147103407529. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016870824s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-415555
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-415555
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=default-k8s-diff-port-415555
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_40_51_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:40:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-415555
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 18:10:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 18:08:49 +0000   Tue, 19 Sep 2023 17:40:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 18:08:49 +0000   Tue, 19 Sep 2023 17:40:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 18:08:49 +0000   Tue, 19 Sep 2023 17:40:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 18:08:49 +0000   Tue, 19 Sep 2023 17:48:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.228
	  Hostname:    default-k8s-diff-port-415555
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 99e6d1633d0a4bbbbcd368c587e05c2e
	  System UUID:                99e6d163-3d0a-4bbb-bcd3-68c587e05c2e
	  Boot ID:                    ce7fb3ba-3d90-469a-92f4-eb71fae2ed96
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-6fxz5                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-415555                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-415555              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-415555     200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-5cghw                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-415555              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-vq4p7                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-415555 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-415555 event: Registered Node default-k8s-diff-port-415555 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-415555 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-415555 event: Registered Node default-k8s-diff-port-415555 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072975] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.400516] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.365776] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149472] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.693761] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.488880] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.128331] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.165634] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.123149] systemd-fstab-generator[685]: Ignoring "noauto" for root device
	[  +0.233529] systemd-fstab-generator[709]: Ignoring "noauto" for root device
	[ +17.451828] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +15.366472] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] <==
	* {"level":"warn","ts":"2023-09-19T18:07:34.29322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.874876ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17311989449926722224 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.228\" mod_revision:1529 > success:<request_put:<key:\"/registry/masterleases/192.168.61.228\" value_size:67 lease:8088617413071946414 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.228\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-19T18:07:34.293828Z","caller":"traceutil/trace.go:171","msg":"trace[263004350] linearizableReadLoop","detail":"{readStateIndex:1819; appliedIndex:1818; }","duration":"387.905981ms","start":"2023-09-19T18:07:33.905883Z","end":"2023-09-19T18:07:34.293789Z","steps":["trace[263004350] 'read index received'  (duration: 137.019337ms)","trace[263004350] 'applied index is now lower than readState.Index'  (duration: 250.885212ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-19T18:07:34.293883Z","caller":"traceutil/trace.go:171","msg":"trace[1913627021] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"458.625791ms","start":"2023-09-19T18:07:33.835232Z","end":"2023-09-19T18:07:34.293857Z","steps":["trace[1913627021] 'process raft request'  (duration: 207.732228ms)","trace[1913627021] 'compare'  (duration: 249.708628ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-19T18:07:34.294089Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T18:07:33.835215Z","time spent":"458.74795ms","remote":"127.0.0.1:42400","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.228\" mod_revision:1529 > success:<request_put:<key:\"/registry/masterleases/192.168.61.228\" value_size:67 lease:8088617413071946414 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.228\" > >"}
	{"level":"warn","ts":"2023-09-19T18:07:34.294244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"388.369541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-19T18:07:34.294287Z","caller":"traceutil/trace.go:171","msg":"trace[178305160] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1537; }","duration":"388.42326ms","start":"2023-09-19T18:07:33.905858Z","end":"2023-09-19T18:07:34.294281Z","steps":["trace[178305160] 'agreement among raft nodes before linearized reading'  (duration: 388.076366ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T18:07:34.294325Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T18:07:33.905843Z","time spent":"388.476262ms","remote":"127.0.0.1:42488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":13,"response size":31,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true "}
	{"level":"warn","ts":"2023-09-19T18:07:34.294369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.730833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-19T18:07:34.294434Z","caller":"traceutil/trace.go:171","msg":"trace[709386274] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1537; }","duration":"227.793153ms","start":"2023-09-19T18:07:34.06663Z","end":"2023-09-19T18:07:34.294423Z","steps":["trace[709386274] 'agreement among raft nodes before linearized reading'  (duration: 227.713776ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T18:07:50.409378Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1308}
	{"level":"info","ts":"2023-09-19T18:07:50.411532Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1308,"took":"1.833088ms","hash":2274396157}
	{"level":"info","ts":"2023-09-19T18:07:50.411635Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2274396157,"revision":1308,"compact-revision":1065}
	{"level":"info","ts":"2023-09-19T18:08:39.60465Z","caller":"traceutil/trace.go:171","msg":"trace[1700132759] transaction","detail":"{read_only:false; response_revision:1591; number_of_response:1; }","duration":"132.250633ms","start":"2023-09-19T18:08:39.472345Z","end":"2023-09-19T18:08:39.604596Z","steps":["trace[1700132759] 'process raft request'  (duration: 131.830858ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T18:08:41.988296Z","caller":"traceutil/trace.go:171","msg":"trace[102059966] transaction","detail":"{read_only:false; response_revision:1593; number_of_response:1; }","duration":"370.004134ms","start":"2023-09-19T18:08:41.618272Z","end":"2023-09-19T18:08:41.988277Z","steps":["trace[102059966] 'process raft request'  (duration: 369.852936ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T18:08:41.988928Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T18:08:41.618255Z","time spent":"370.501778ms","remote":"127.0.0.1:42430","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1591 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-09-19T18:08:42.259187Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.509143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-19T18:08:42.259267Z","caller":"traceutil/trace.go:171","msg":"trace[1746702306] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1593; }","duration":"191.626119ms","start":"2023-09-19T18:08:42.067628Z","end":"2023-09-19T18:08:42.259254Z","steps":["trace[1746702306] 'range keys from in-memory index tree'  (duration: 191.443401ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T18:08:42.259194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.292625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-19T18:08:42.259385Z","caller":"traceutil/trace.go:171","msg":"trace[287611684] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1593; }","duration":"146.503131ms","start":"2023-09-19T18:08:42.112877Z","end":"2023-09-19T18:08:42.25938Z","steps":["trace[287611684] 'count revisions from in-memory index tree'  (duration: 146.203559ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T18:09:01.762351Z","caller":"traceutil/trace.go:171","msg":"trace[330333037] transaction","detail":"{read_only:false; response_revision:1609; number_of_response:1; }","duration":"127.893741ms","start":"2023-09-19T18:09:01.634384Z","end":"2023-09-19T18:09:01.762278Z","steps":["trace[330333037] 'process raft request'  (duration: 127.312613ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T18:09:02.519552Z","caller":"traceutil/trace.go:171","msg":"trace[1766828080] transaction","detail":"{read_only:false; response_revision:1610; number_of_response:1; }","duration":"387.756857ms","start":"2023-09-19T18:09:02.131775Z","end":"2023-09-19T18:09:02.519532Z","steps":["trace[1766828080] 'process raft request'  (duration: 387.318188ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T18:09:02.519966Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T18:09:02.131757Z","time spent":"388.158352ms","remote":"127.0.0.1:42430","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1608 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-09-19T18:09:50.55071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.576588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-19T18:09:50.552609Z","caller":"traceutil/trace.go:171","msg":"trace[769739432] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1649; }","duration":"118.434039ms","start":"2023-09-19T18:09:50.434059Z","end":"2023-09-19T18:09:50.552493Z","steps":["trace[769739432] 'range keys from in-memory index tree'  (duration: 116.44993ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T18:09:51.113799Z","caller":"traceutil/trace.go:171","msg":"trace[195912826] transaction","detail":"{read_only:false; response_revision:1650; number_of_response:1; }","duration":"206.306417ms","start":"2023-09-19T18:09:50.907478Z","end":"2023-09-19T18:09:51.113785Z","steps":["trace[195912826] 'process raft request'  (duration: 205.913279ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:10:04 up 22 min,  0 users,  load average: 0.35, 0.32, 0.19
	Linux default-k8s-diff-port-415555 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] <==
	* I0919 18:07:34.294918       1 trace.go:236] Trace[369960516]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.61.228,type:*v1.Endpoints,resource:apiServerIPInfo (19-Sep-2023 18:07:33.712) (total time: 582ms):
	Trace[369960516]: ---"Transaction prepared" 120ms (18:07:33.834)
	Trace[369960516]: ---"Txn call completed" 460ms (18:07:34.294)
	Trace[369960516]: [582.330177ms] [582.330177ms] END
	I0919 18:07:52.209852       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:07:52.378968       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:07:52.379257       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:07:52.379779       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:07:53.380530       1 handler_proxy.go:93] no RequestInfo found in the context
	W0919 18:07:53.380583       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:07:53.380769       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:07:53.380836       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 18:07:53.380781       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:07:53.382281       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:08:52.209647       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:08:53.381581       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:08:53.381755       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:08:53.381783       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 18:08:53.383207       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:08:53.383279       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:08:53.383290       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:09:52.210258       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] <==
	* I0919 18:04:14.500107       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="237.819µs"
	E0919 18:04:35.360960       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:04:35.923675       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:05:05.368868       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:05:05.932930       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:05:35.375677       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:05:35.946248       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:06:05.382235       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:06:05.958261       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:06:35.389127       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:06:35.968535       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:07:05.396113       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:07:05.979235       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:07:35.402055       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:07:35.991961       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:08:05.408740       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:08:06.002302       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:08:35.416413       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:08:36.016092       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:09:05.423924       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:09:05.515706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="253.837µs"
	I0919 18:09:06.024390       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 18:09:19.506782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="122.944µs"
	E0919 18:09:35.429855       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:09:36.033834       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] <==
	* I0919 17:47:53.900145       1 server_others.go:69] "Using iptables proxy"
	I0919 17:47:53.916483       1 node.go:141] Successfully retrieved node IP: 192.168.61.228
	I0919 17:47:53.984386       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 17:47:53.984480       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 17:47:54.001843       1 server_others.go:152] "Using iptables Proxier"
	I0919 17:47:54.002378       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 17:47:54.003485       1 server.go:846] "Version info" version="v1.28.2"
	I0919 17:47:54.004708       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:47:54.007325       1 config.go:188] "Starting service config controller"
	I0919 17:47:54.007390       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 17:47:54.007436       1 config.go:97] "Starting endpoint slice config controller"
	I0919 17:47:54.007459       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 17:47:54.008478       1 config.go:315] "Starting node config controller"
	I0919 17:47:54.009170       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 17:47:54.108250       1 shared_informer.go:318] Caches are synced for service config
	I0919 17:47:54.108237       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 17:47:54.109694       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] <==
	* I0919 17:47:49.767329       1 serving.go:348] Generated self-signed cert in-memory
	W0919 17:47:52.319207       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 17:47:52.319370       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:47:52.319479       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 17:47:52.319509       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 17:47:52.409675       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I0919 17:47:52.409779       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:47:52.418361       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 17:47:52.418732       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 17:47:52.429393       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 17:47:52.432070       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 17:47:52.523798       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:47:19 UTC, ends at Tue 2023-09-19 18:10:05 UTC. --
	Sep 19 18:07:45 default-k8s-diff-port-415555 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:07:45 default-k8s-diff-port-415555 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:07:45 default-k8s-diff-port-415555 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:07:53 default-k8s-diff-port-415555 kubelet[932]: E0919 18:07:53.484717     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:08:04 default-k8s-diff-port-415555 kubelet[932]: E0919 18:08:04.484717     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:08:17 default-k8s-diff-port-415555 kubelet[932]: E0919 18:08:17.485427     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:08:29 default-k8s-diff-port-415555 kubelet[932]: E0919 18:08:29.486352     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:08:42 default-k8s-diff-port-415555 kubelet[932]: E0919 18:08:42.484654     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:08:45 default-k8s-diff-port-415555 kubelet[932]: E0919 18:08:45.499819     932 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:08:45 default-k8s-diff-port-415555 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:08:45 default-k8s-diff-port-415555 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:08:45 default-k8s-diff-port-415555 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:08:54 default-k8s-diff-port-415555 kubelet[932]: E0919 18:08:54.508218     932 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 18:08:54 default-k8s-diff-port-415555 kubelet[932]: E0919 18:08:54.508273     932 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 18:08:54 default-k8s-diff-port-415555 kubelet[932]: E0919 18:08:54.508472     932 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hmfmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-vq4p7_kube-system(be4949af-dd94-45ea-bb7d-2fe124ecd2a5): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 19 18:08:54 default-k8s-diff-port-415555 kubelet[932]: E0919 18:08:54.508512     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:09:05 default-k8s-diff-port-415555 kubelet[932]: E0919 18:09:05.485471     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:09:19 default-k8s-diff-port-415555 kubelet[932]: E0919 18:09:19.485564     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:09:30 default-k8s-diff-port-415555 kubelet[932]: E0919 18:09:30.484613     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:09:43 default-k8s-diff-port-415555 kubelet[932]: E0919 18:09:43.485955     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	Sep 19 18:09:45 default-k8s-diff-port-415555 kubelet[932]: E0919 18:09:45.521133     932 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:09:45 default-k8s-diff-port-415555 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:09:45 default-k8s-diff-port-415555 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:09:45 default-k8s-diff-port-415555 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:09:58 default-k8s-diff-port-415555 kubelet[932]: E0919 18:09:58.484260     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vq4p7" podUID="be4949af-dd94-45ea-bb7d-2fe124ecd2a5"
	
	* 
	* ==> storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] <==
	* I0919 17:47:53.653660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 17:48:23.669872       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] <==
	* I0919 17:48:24.833529       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 17:48:24.852143       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 17:48:24.852230       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 17:48:42.259190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"de8a1b71-0678-4f8a-80b6-13fe53c9d27a", APIVersion:"v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-415555_d24e9b48-8ae7-4ef8-a7c5-bcf71a3f09c6 became leader
	I0919 17:48:42.259679       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 17:48:42.259880       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-415555_d24e9b48-8ae7-4ef8-a7c5-bcf71a3f09c6!
	I0919 17:48:42.361227       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-415555_d24e9b48-8ae7-4ef8-a7c5-bcf71a3f09c6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-415555 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vq4p7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-415555 describe pod metrics-server-57f55c9bc5-vq4p7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-415555 describe pod metrics-server-57f55c9bc5-vq4p7: exit status 1 (67.998542ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vq4p7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-415555 describe pod metrics-server-57f55c9bc5-vq4p7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (522.63s)
E0919 18:11:44.401387   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
E0919 18:11:46.962526   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
E0919 18:11:50.831267   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
E0919 18:11:52.083024   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (327.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-100627 -n old-k8s-version-100627
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-19 18:06:57.451272934 +0000 UTC m=+5550.383256975
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100627 -n old-k8s-version-100627
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-100627 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-100627 logs -n 25: (1.274000784s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-512928 -- sudo                         | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-512928                                 | cert-options-512928          | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-367630                            | force-systemd-env-367630     | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-415155            | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-140688 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | disable-driver-mounts-140688                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:41 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-215748             | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-415555  | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC | 19 Sep 23 17:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC |                     |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-415155                 | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-215748                  | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-415555       | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC | 19 Sep 23 17:52 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-100627        | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC | 19 Sep 23 17:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-100627             | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC | 19 Sep 23 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:49:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:49:25.690379   47798 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:49:25.690666   47798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:49:25.690680   47798 out.go:309] Setting ErrFile to fd 2...
	I0919 17:49:25.690688   47798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:49:25.690866   47798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:49:25.691435   47798 out.go:303] Setting JSON to false
	I0919 17:49:25.692368   47798 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5516,"bootTime":1695140250,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:49:25.692468   47798 start.go:138] virtualization: kvm guest
	I0919 17:49:25.694628   47798 out.go:177] * [old-k8s-version-100627] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:49:25.696349   47798 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:49:25.696345   47798 notify.go:220] Checking for updates...
	I0919 17:49:25.697700   47798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:49:25.699081   47798 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:49:25.700392   47798 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:49:25.701684   47798 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:49:25.704016   47798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:49:25.705911   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:49:25.706464   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.706525   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.722505   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40829
	I0919 17:49:25.722936   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.723454   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.723479   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.723851   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.724042   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.726028   47798 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0919 17:49:25.727479   47798 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:49:25.727787   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.727829   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.743272   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I0919 17:49:25.743700   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.744180   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.744206   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.744589   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.744775   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.781696   47798 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:49:25.783056   47798 start.go:298] selected driver: kvm2
	I0919 17:49:25.783069   47798 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:49:25.783172   47798 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:49:25.783797   47798 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:49:25.783868   47798 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:49:25.797796   47798 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:49:25.798190   47798 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 17:49:25.798229   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:49:25.798239   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:49:25.798254   47798 start_flags.go:321] config:
	{Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:49:25.798391   47798 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:49:25.800110   47798 out.go:177] * Starting control plane node old-k8s-version-100627 in cluster old-k8s-version-100627
	I0919 17:49:25.801393   47798 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:49:25.801433   47798 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0919 17:49:25.801447   47798 cache.go:57] Caching tarball of preloaded images
	I0919 17:49:25.801545   47798 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 17:49:25.801559   47798 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0919 17:49:25.801689   47798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:49:25.801924   47798 start.go:365] acquiring machines lock for old-k8s-version-100627: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:49:25.801971   47798 start.go:369] acquired machines lock for "old-k8s-version-100627" in 26.483µs
	I0919 17:49:25.801985   47798 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:49:25.801989   47798 fix.go:54] fixHost starting: 
	I0919 17:49:25.802270   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:49:25.802300   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:49:25.816968   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
	I0919 17:49:25.817484   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:49:25.818034   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:49:25.818069   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:49:25.818376   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:49:25.818564   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.818799   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:49:25.820610   47798 fix.go:102] recreateIfNeeded on old-k8s-version-100627: state=Running err=<nil>
	W0919 17:49:25.820646   47798 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:49:25.822656   47798 out.go:177] * Updating the running kvm2 "old-k8s-version-100627" VM ...
	I0919 17:49:25.475965   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:27.476794   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:24.179260   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:26.686283   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:26.993419   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:28.995394   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:25.824024   47798 machine.go:88] provisioning docker machine ...
	I0919 17:49:25.824053   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:49:25.824279   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:49:25.824480   47798 buildroot.go:166] provisioning hostname "old-k8s-version-100627"
	I0919 17:49:25.824508   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:49:25.824671   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:49:25.827416   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:49:25.827890   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:37:50 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:49:25.827920   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:49:25.828092   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:49:25.828287   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:49:25.828490   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:49:25.828642   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:49:25.828819   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:49:25.829172   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:49:25.829188   47798 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100627 && echo "old-k8s-version-100627" | sudo tee /etc/hostname
	I0919 17:49:28.724736   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:29.976563   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.976829   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:29.180775   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.677584   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:33.678666   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.493348   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:33.495016   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:31.796651   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:33.977341   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:36.477521   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:36.178183   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:38.679802   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:35.495920   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:37.993770   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:39.994165   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:37.876662   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:38.477642   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:40.977376   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:41.177699   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:43.178895   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:42.494311   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:44.494974   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:40.948690   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:43.476725   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:45.477936   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:47.977074   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:45.678443   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:48.178687   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:46.994529   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:49.494895   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:47.028682   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:50.100607   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:50.476569   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:52.478246   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:50.179250   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:52.180827   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:51.994091   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.494911   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.480792   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.978326   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:54.678236   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.678493   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:58.678539   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:56.496729   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:58.993989   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:49:59.224657   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:49:59.476603   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:01.477023   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:00.678913   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:03.178281   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:01.494409   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:03.993808   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:02.292662   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:03.477796   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.976205   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.180836   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:07.678312   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:05.994188   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:07.999270   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:08.372675   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:08.476522   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:10.976260   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:09.679568   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:12.179377   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:10.494291   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:12.995682   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:11.444679   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:13.476906   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:15.478193   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.976583   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:14.679325   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:16.690040   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:15.496998   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.993599   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:19.993922   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:17.524614   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:20.596688   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:20.476110   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:22.477330   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:19.184902   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:21.678830   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:23.679261   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:22.494626   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:24.993912   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:24.976379   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.976627   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.177309   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:28.179300   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:27.494133   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:29.494473   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:26.676677   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:29.748706   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:28.976722   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:30.980716   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:30.678715   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.177789   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:31.993563   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.995728   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:33.476205   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.975739   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:37.978115   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.178188   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:37.178328   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:36.493541   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:38.494380   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:35.832612   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:38.900652   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:40.476580   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:42.476989   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:39.180279   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:41.678338   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:43.678611   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:40.993785   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:42.994446   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:44.980626   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:44.976641   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:46.977032   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:46.178379   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:48.179405   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:45.494929   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:47.993704   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:49.995192   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:48.052702   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:48.977244   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:51.477325   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:50.678663   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:53.178707   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:52.493646   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:54.494478   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:54.132706   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:50:53.477737   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:55.977429   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:57.978145   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:55.678855   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:58.177724   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:56.993145   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:58.994370   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:50:57.208643   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:00.476193   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:02.476286   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:00.178398   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:02.677951   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:01.501993   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:03.993491   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:03.288721   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:04.476795   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:06.976387   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:05.177376   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:07.178224   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:05.995006   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:08.494405   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:06.360657   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:08.977404   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:11.475407   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:09.178322   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:11.179143   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:13.180235   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:10.494521   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:12.993993   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:12.436681   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:15.508678   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:13.975736   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.977800   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.679181   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:18.177065   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:15.494642   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:17.494846   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:19.993481   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:18.475821   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:20.476773   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:22.976145   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:20.178065   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:22.178249   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:21.993613   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:23.994655   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:21.588622   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:24.660703   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:24.976569   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:27.476021   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:24.678762   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:26.682314   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:26.493981   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:28.494262   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:29.477183   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:31.976125   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:29.178390   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:31.178551   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:33.678277   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:30.495041   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:32.993120   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:30.740717   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:33.816640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:33.977079   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:36.475678   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:36.179024   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:38.678508   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:35.495368   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:37.994521   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:39.892631   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:38.476601   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:40.978279   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:41.178365   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:43.678896   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:40.493826   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:42.992893   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:44.993574   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:42.968646   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:43.478156   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:45.976257   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:47.977272   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:46.178127   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:48.178192   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:47.494860   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:49.993714   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:49.044674   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:50.476391   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:52.976686   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:50.678434   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:53.177908   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:51.995140   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:54.494996   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:52.116699   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:54.977835   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:57.475875   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:55.178219   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:57.179598   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:56.992881   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:58.994100   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:58.200619   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:51:59.476340   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:01.975559   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:51:59.678336   45961 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:00.158668   45961 pod_ready.go:81] duration metric: took 4m0.000408372s waiting for pod "metrics-server-57f55c9bc5-9clpv" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:00.158710   45961 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:00.158733   45961 pod_ready.go:38] duration metric: took 4m12.69690087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:00.158768   45961 kubeadm.go:640] restartCluster took 4m32.67884897s
	W0919 17:52:00.158862   45961 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0919 17:52:00.158899   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:52:00.995208   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:03.493604   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:01.272609   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:03.976776   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:06.478653   46282 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:05.495181   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:07.995025   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:07.348614   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:10.424641   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:08.170853   46282 pod_ready.go:81] duration metric: took 4m0.00010513s waiting for pod "metrics-server-57f55c9bc5-vq4p7" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:08.170890   46282 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:08.170903   46282 pod_ready.go:38] duration metric: took 4m5.202195097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:08.170929   46282 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:52:08.170960   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:08.171010   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:08.229465   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:08.229484   46282 cri.go:89] found id: ""
	I0919 17:52:08.229491   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:08.229537   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.234379   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:08.234434   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:08.280999   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:08.281033   46282 cri.go:89] found id: ""
	I0919 17:52:08.281044   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:08.281097   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.285499   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:08.285561   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:08.327387   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:08.327413   46282 cri.go:89] found id: ""
	I0919 17:52:08.327423   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:08.327481   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.333158   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:08.333235   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:08.375921   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:08.375946   46282 cri.go:89] found id: ""
	I0919 17:52:08.375955   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:08.376008   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.380156   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:08.380220   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:08.425586   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:08.425613   46282 cri.go:89] found id: ""
	I0919 17:52:08.425620   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:08.425676   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.430229   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:08.430302   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:08.482920   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:08.482946   46282 cri.go:89] found id: ""
	I0919 17:52:08.482956   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:08.483017   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.488497   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:08.488559   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:08.543405   46282 cri.go:89] found id: ""
	I0919 17:52:08.543432   46282 logs.go:284] 0 containers: []
	W0919 17:52:08.543441   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:08.543449   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:08.543510   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:08.588287   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:08.588309   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:08.588314   46282 cri.go:89] found id: ""
	I0919 17:52:08.588326   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:08.588390   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.592986   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:08.597223   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:08.597245   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:08.648372   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:08.648400   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:08.705158   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:08.705203   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:08.754475   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:08.754511   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:08.797571   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:08.797603   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:08.950578   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:08.950617   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:08.998529   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:08.998555   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:09.039415   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:09.039445   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:09.081622   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:09.081657   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:09.095239   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:09.095269   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:09.141402   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:09.141429   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:09.186918   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:09.186953   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:09.244473   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:09.244508   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:12.216337   46282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:52:12.232741   46282 api_server.go:72] duration metric: took 4m15.890515742s to wait for apiserver process to appear ...
	I0919 17:52:12.232764   46282 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:52:12.232793   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:12.232844   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:12.279741   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:12.279769   46282 cri.go:89] found id: ""
	I0919 17:52:12.279780   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:12.279836   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.284490   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:12.284560   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:12.322547   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:12.322575   46282 cri.go:89] found id: ""
	I0919 17:52:12.322585   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:12.322648   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.326924   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:12.326981   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:12.376181   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:12.376201   46282 cri.go:89] found id: ""
	I0919 17:52:12.376208   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:12.376259   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.380831   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:12.380892   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:12.422001   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:12.422035   46282 cri.go:89] found id: ""
	I0919 17:52:12.422045   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:12.422112   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.426372   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:12.426456   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:12.474718   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:12.474739   46282 cri.go:89] found id: ""
	I0919 17:52:12.474749   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:12.474804   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.479781   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:12.479837   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:12.525008   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:12.525038   46282 cri.go:89] found id: ""
	I0919 17:52:12.525047   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:12.525106   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.529414   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:12.529480   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:12.573369   46282 cri.go:89] found id: ""
	I0919 17:52:12.573395   46282 logs.go:284] 0 containers: []
	W0919 17:52:12.573403   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:12.573410   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:12.573461   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:12.618041   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:12.618063   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:12.618067   46282 cri.go:89] found id: ""
	I0919 17:52:12.618074   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:12.618118   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.622248   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:12.626519   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:12.626537   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:12.667023   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:12.667052   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:13.123963   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:13.123996   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:10.495145   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:12.994448   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:13.243498   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:13.243533   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:13.289172   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:13.289208   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:13.325853   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:13.325883   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:13.363915   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:13.363943   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:13.412359   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:13.412394   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:13.458675   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:13.458706   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:13.473516   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:13.473549   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:13.538694   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:13.538723   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:13.606826   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:13.606871   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:13.652363   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:13.652394   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.204482   46282 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8444/healthz ...
	I0919 17:52:16.210733   46282 api_server.go:279] https://192.168.61.228:8444/healthz returned 200:
	ok
	I0919 17:52:16.212054   46282 api_server.go:141] control plane version: v1.28.2
	I0919 17:52:16.212076   46282 api_server.go:131] duration metric: took 3.979306376s to wait for apiserver health ...
	I0919 17:52:16.212085   46282 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:52:16.212106   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 17:52:16.212148   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 17:52:16.263882   46282 cri.go:89] found id: "54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:16.263908   46282 cri.go:89] found id: ""
	I0919 17:52:16.263918   46282 logs.go:284] 1 containers: [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b]
	I0919 17:52:16.263978   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.268238   46282 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 17:52:16.268291   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 17:52:16.309480   46282 cri.go:89] found id: "837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:16.309504   46282 cri.go:89] found id: ""
	I0919 17:52:16.309511   46282 logs.go:284] 1 containers: [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2]
	I0919 17:52:16.309560   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.313860   46282 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 17:52:16.313910   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 17:52:16.353715   46282 cri.go:89] found id: "6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:16.353741   46282 cri.go:89] found id: ""
	I0919 17:52:16.353751   46282 logs.go:284] 1 containers: [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82]
	I0919 17:52:16.353812   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.358128   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 17:52:16.358194   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 17:52:16.398792   46282 cri.go:89] found id: "23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:16.398811   46282 cri.go:89] found id: ""
	I0919 17:52:16.398818   46282 logs.go:284] 1 containers: [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c]
	I0919 17:52:16.398865   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.403410   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 17:52:16.403463   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 17:52:16.449884   46282 cri.go:89] found id: "52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:16.449910   46282 cri.go:89] found id: ""
	I0919 17:52:16.449924   46282 logs.go:284] 1 containers: [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201]
	I0919 17:52:16.449966   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.454404   46282 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 17:52:16.454462   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 17:52:16.500246   46282 cri.go:89] found id: "3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:16.500265   46282 cri.go:89] found id: ""
	I0919 17:52:16.500274   46282 logs.go:284] 1 containers: [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6]
	I0919 17:52:16.500328   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.504468   46282 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 17:52:16.504531   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 17:52:16.545865   46282 cri.go:89] found id: ""
	I0919 17:52:16.545888   46282 logs.go:284] 0 containers: []
	W0919 17:52:16.545895   46282 logs.go:286] No container was found matching "kindnet"
	I0919 17:52:16.545900   46282 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 17:52:16.545953   46282 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 17:52:16.584533   46282 cri.go:89] found id: "c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:16.584560   46282 cri.go:89] found id: "9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.584565   46282 cri.go:89] found id: ""
	I0919 17:52:16.584571   46282 logs.go:284] 2 containers: [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6]
	I0919 17:52:16.584619   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.588723   46282 ssh_runner.go:195] Run: which crictl
	I0919 17:52:16.592429   46282 logs.go:123] Gathering logs for kubelet ...
	I0919 17:52:16.592459   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 17:52:16.643853   46282 logs.go:123] Gathering logs for kube-proxy [52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201] ...
	I0919 17:52:16.643884   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52ed624ea25f593ab317d5ad73ddbab8c01ff12b8a6759a631ca0283aab5b201"
	I0919 17:52:16.693660   46282 logs.go:123] Gathering logs for dmesg ...
	I0919 17:52:16.693697   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 17:52:16.710833   46282 logs.go:123] Gathering logs for etcd [837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2] ...
	I0919 17:52:16.710860   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837d6df2a022ccb57630cdf740b15fa2b9da05b673ddbb91ffb3934ba3d132f2"
	I0919 17:52:16.769518   46282 logs.go:123] Gathering logs for kube-scheduler [23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c] ...
	I0919 17:52:16.769548   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23740abdea3769a92ffce4e65264b1aeb7fdd699666afc96e407c719bcaa9a1c"
	I0919 17:52:16.819614   46282 logs.go:123] Gathering logs for storage-provisioner [9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6] ...
	I0919 17:52:16.819645   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9055f7f0e2b857b2eadac029acb6d09d8e69af0db6e525ab6cb2244e8f913cc6"
	I0919 17:52:16.860112   46282 logs.go:123] Gathering logs for container status ...
	I0919 17:52:16.860154   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 17:52:16.918657   46282 logs.go:123] Gathering logs for storage-provisioner [c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4] ...
	I0919 17:52:16.918687   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7e1ede777c672a06b26e78e1dced685d3d9a53353c86853f67e6f9386802cb4"
	I0919 17:52:16.962381   46282 logs.go:123] Gathering logs for CRI-O ...
	I0919 17:52:16.962412   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 17:52:17.304580   46282 logs.go:123] Gathering logs for describe nodes ...
	I0919 17:52:17.304618   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 17:52:17.449337   46282 logs.go:123] Gathering logs for kube-apiserver [54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b] ...
	I0919 17:52:17.449368   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54b31f09971f110188a2e5c8100566ccb36ed92d22c3f7ef2333f250070fc53b"
	I0919 17:52:17.522234   46282 logs.go:123] Gathering logs for coredns [6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82] ...
	I0919 17:52:17.522268   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6165f78e9f3be596550110342f4f70da9308db9d9de814250a20d7a61797bd82"
	I0919 17:52:17.581061   46282 logs.go:123] Gathering logs for kube-controller-manager [3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6] ...
	I0919 17:52:17.581093   46282 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ead0fadb5c309830021781b083dbe8ec0962c4496f7ff037c70aff3e95dfbf6"
	I0919 17:52:13.986517   45961 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.82758933s)
	I0919 17:52:13.986593   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:14.002396   45961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:52:14.012005   45961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:52:14.020952   45961 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:52:14.021075   45961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 17:52:14.249350   45961 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:52:20.161795   46282 system_pods.go:59] 8 kube-system pods found
	I0919 17:52:20.161825   46282 system_pods.go:61] "coredns-5dd5756b68-6fxz5" [75ff79be-8293-4d55-b285-4c6d1d64adf0] Running
	I0919 17:52:20.161833   46282 system_pods.go:61] "etcd-default-k8s-diff-port-415555" [673a7ad9-f811-426c-aef6-7a36f2ddcacf] Running
	I0919 17:52:20.161840   46282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-415555" [a28bd2e3-0c0b-415e-986f-e92052c9eb0d] Running
	I0919 17:52:20.161845   46282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-415555" [b3b0d3ac-4f9f-4127-902e-4ea4c49100ba] Running
	I0919 17:52:20.161850   46282 system_pods.go:61] "kube-proxy-5cghw" [2f9f0da5-e5e6-40e4-bb49-16749597ac07] Running
	I0919 17:52:20.161856   46282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-415555" [60410add-cbf3-403a-9a84-f8566f804757] Running
	I0919 17:52:20.161866   46282 system_pods.go:61] "metrics-server-57f55c9bc5-vq4p7" [be4949af-dd94-45ea-bb7d-2fe124ecd2a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:20.161876   46282 system_pods.go:61] "storage-provisioner" [d11f41ac-f846-46de-a517-dab454f05033] Running
	I0919 17:52:20.161885   46282 system_pods.go:74] duration metric: took 3.949793054s to wait for pod list to return data ...
	I0919 17:52:20.161895   46282 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:52:20.165017   46282 default_sa.go:45] found service account: "default"
	I0919 17:52:20.165041   46282 default_sa.go:55] duration metric: took 3.138746ms for default service account to be created ...
	I0919 17:52:20.165051   46282 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:52:20.171771   46282 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:20.171798   46282 system_pods.go:89] "coredns-5dd5756b68-6fxz5" [75ff79be-8293-4d55-b285-4c6d1d64adf0] Running
	I0919 17:52:20.171807   46282 system_pods.go:89] "etcd-default-k8s-diff-port-415555" [673a7ad9-f811-426c-aef6-7a36f2ddcacf] Running
	I0919 17:52:20.171815   46282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-415555" [a28bd2e3-0c0b-415e-986f-e92052c9eb0d] Running
	I0919 17:52:20.171823   46282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-415555" [b3b0d3ac-4f9f-4127-902e-4ea4c49100ba] Running
	I0919 17:52:20.171841   46282 system_pods.go:89] "kube-proxy-5cghw" [2f9f0da5-e5e6-40e4-bb49-16749597ac07] Running
	I0919 17:52:20.171847   46282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-415555" [60410add-cbf3-403a-9a84-f8566f804757] Running
	I0919 17:52:20.171858   46282 system_pods.go:89] "metrics-server-57f55c9bc5-vq4p7" [be4949af-dd94-45ea-bb7d-2fe124ecd2a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:20.171867   46282 system_pods.go:89] "storage-provisioner" [d11f41ac-f846-46de-a517-dab454f05033] Running
	I0919 17:52:20.171879   46282 system_pods.go:126] duration metric: took 6.820805ms to wait for k8s-apps to be running ...
	I0919 17:52:20.171891   46282 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:52:20.171944   46282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:20.191948   46282 system_svc.go:56] duration metric: took 20.046863ms WaitForService to wait for kubelet.
	I0919 17:52:20.191977   46282 kubeadm.go:581] duration metric: took 4m23.849755591s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:52:20.192003   46282 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:52:20.198066   46282 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:52:20.198090   46282 node_conditions.go:123] node cpu capacity is 2
	I0919 17:52:20.198101   46282 node_conditions.go:105] duration metric: took 6.093464ms to run NodePressure ...
	I0919 17:52:20.198113   46282 start.go:228] waiting for startup goroutines ...
	I0919 17:52:20.198122   46282 start.go:233] waiting for cluster config update ...
	I0919 17:52:20.198131   46282 start.go:242] writing updated cluster config ...
	I0919 17:52:20.198390   46282 ssh_runner.go:195] Run: rm -f paused
	I0919 17:52:20.260334   46282 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:52:20.262660   46282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-415555" cluster and "default" namespace by default
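
The 46282 process above probes https://192.168.61.228:8444/healthz and gets a 200 with body "ok" before walking the kube-system pods and declaring the cluster ready. A minimal Go sketch of that kind of healthz check follows; the hard-coded endpoint, the 5s timeout, and the skipped certificate verification are illustrative assumptions made only to keep the sketch self-contained, not minikube's actual api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: the endpoint below is the one shown in the log above;
	// certificate verification is skipped only so the sketch runs standalone.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.61.228:8444/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok", as in the log above.
	fmt.Println(resp.StatusCode, string(body))
}
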
	I0919 17:52:15.493238   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:17.495147   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:19.497990   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:16.500634   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:19.572697   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:25.436229   45961 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 17:52:25.436332   45961 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:52:25.436448   45961 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:52:25.436580   45961 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:52:25.436693   45961 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:52:25.436784   45961 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:52:25.438740   45961 out.go:204]   - Generating certificates and keys ...
	I0919 17:52:25.438831   45961 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:52:25.438907   45961 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:52:25.439035   45961 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:52:25.439117   45961 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:52:25.439225   45961 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:52:25.439306   45961 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:52:25.439378   45961 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:52:25.439455   45961 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:52:25.439554   45961 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:52:25.439646   45961 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:52:25.439692   45961 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:52:25.439759   45961 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:52:25.439825   45961 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:52:25.439892   45961 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:52:25.439982   45961 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:52:25.440068   45961 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:52:25.440183   45961 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:52:25.440276   45961 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:52:25.441897   45961 out.go:204]   - Booting up control plane ...
	I0919 17:52:25.442005   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:52:25.442103   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:52:25.442163   45961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:52:25.442248   45961 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:52:25.442343   45961 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:52:25.442428   45961 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 17:52:25.442641   45961 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:52:25.442703   45961 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003935 seconds
	I0919 17:52:25.442819   45961 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:52:25.442911   45961 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:52:25.442959   45961 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:52:25.443101   45961 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-215748 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 17:52:25.443144   45961 kubeadm.go:322] [bootstrap-token] Using token: xzx8bb.31rxl0d2e5l1asvj
	I0919 17:52:25.444479   45961 out.go:204]   - Configuring RBAC rules ...
	I0919 17:52:25.444574   45961 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:52:25.444640   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 17:52:25.444747   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:52:25.444886   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:52:25.445049   45961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:52:25.445178   45961 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:52:25.445344   45961 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 17:52:25.445403   45961 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:52:25.445462   45961 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:52:25.445475   45961 kubeadm.go:322] 
	I0919 17:52:25.445558   45961 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:52:25.445569   45961 kubeadm.go:322] 
	I0919 17:52:25.445659   45961 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:52:25.445672   45961 kubeadm.go:322] 
	I0919 17:52:25.445691   45961 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:52:25.445740   45961 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:52:25.445779   45961 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:52:25.445785   45961 kubeadm.go:322] 
	I0919 17:52:25.445824   45961 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 17:52:25.445830   45961 kubeadm.go:322] 
	I0919 17:52:25.445873   45961 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 17:52:25.445879   45961 kubeadm.go:322] 
	I0919 17:52:25.445939   45961 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:52:25.446038   45961 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:52:25.446154   45961 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:52:25.446172   45961 kubeadm.go:322] 
	I0919 17:52:25.446275   45961 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 17:52:25.446361   45961 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:52:25.446371   45961 kubeadm.go:322] 
	I0919 17:52:25.446473   45961 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xzx8bb.31rxl0d2e5l1asvj \
	I0919 17:52:25.446594   45961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 17:52:25.446623   45961 kubeadm.go:322] 	--control-plane 
	I0919 17:52:25.446641   45961 kubeadm.go:322] 
	I0919 17:52:25.446774   45961 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:52:25.446782   45961 kubeadm.go:322] 
	I0919 17:52:25.446874   45961 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xzx8bb.31rxl0d2e5l1asvj \
	I0919 17:52:25.447044   45961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:52:25.447066   45961 cni.go:84] Creating CNI manager for ""
	I0919 17:52:25.447079   45961 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:52:25.448742   45961 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:52:21.994034   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:24.494339   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:25.656705   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:25.450147   45961 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:52:25.473476   45961 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
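
The 45961 process has just written a bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist. The exact 457-byte file is not reproduced in the log; the Go sketch below only prints an illustrative bridge-plus-portmap conflist of the general shape such a file takes (the pod subnet and plugin options are assumptions, not values taken from this run).

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Illustrative conflist only; the real file minikube copies to the node
	// is not shown in this log, so every value here is an assumption.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		fmt.Println("marshal:", err)
		return
	}
	// The log shows minikube placing its version of this file at
	// /etc/cni/net.d/1-k8s.conflist on the node; here it is just printed.
	os.Stdout.Write(append(data, '\n'))
}
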
	I0919 17:52:25.529295   45961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:52:25.529383   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:25.529387   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=no-preload-215748 minikube.k8s.io/updated_at=2023_09_19T17_52_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:25.625308   45961 ops.go:34] apiserver oom_adj: -16
	I0919 17:52:25.905954   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.037543   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.638479   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:27.138484   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:27.637901   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:28.138033   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:28.638787   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:26.494798   45696 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:28.213192   45696 pod_ready.go:81] duration metric: took 4m0.001033854s waiting for pod "metrics-server-57f55c9bc5-5jqm8" in "kube-system" namespace to be "Ready" ...
	E0919 17:52:28.213226   45696 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:52:28.213243   45696 pod_ready.go:38] duration metric: took 4m12.067034727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:28.213266   45696 kubeadm.go:640] restartCluster took 4m32.254857032s
	W0919 17:52:28.213338   45696 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0919 17:52:28.213378   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:52:28.728646   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:29.138616   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:29.638381   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:30.138155   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:30.637984   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:31.137977   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:31.638547   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:32.138617   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:32.638253   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:33.138335   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:33.638302   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:34.804640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:34.138702   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:34.638549   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:35.138431   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:35.638642   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:36.138000   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:36.638726   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:37.138394   45961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:37.315805   45961 kubeadm.go:1081] duration metric: took 11.786488266s to wait for elevateKubeSystemPrivileges.
	I0919 17:52:37.315840   45961 kubeadm.go:406] StartCluster complete in 5m9.899215362s
	I0919 17:52:37.315856   45961 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:52:37.315945   45961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:52:37.317563   45961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:52:37.317815   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:52:37.317844   45961 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:52:37.317936   45961 addons.go:69] Setting storage-provisioner=true in profile "no-preload-215748"
	I0919 17:52:37.317943   45961 addons.go:69] Setting default-storageclass=true in profile "no-preload-215748"
	I0919 17:52:37.317959   45961 addons.go:231] Setting addon storage-provisioner=true in "no-preload-215748"
	I0919 17:52:37.317963   45961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-215748"
	W0919 17:52:37.317967   45961 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:52:37.317964   45961 addons.go:69] Setting metrics-server=true in profile "no-preload-215748"
	I0919 17:52:37.317988   45961 addons.go:231] Setting addon metrics-server=true in "no-preload-215748"
	W0919 17:52:37.318000   45961 addons.go:240] addon metrics-server should already be in state true
	I0919 17:52:37.318016   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.318041   45961 config.go:182] Loaded profile config "no-preload-215748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:52:37.318051   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.318380   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318407   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318416   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.318429   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.318475   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.318495   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.334365   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0919 17:52:37.334822   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.335368   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.335395   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.335861   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.336052   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.337347   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0919 17:52:37.337347   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0919 17:52:37.337998   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.338047   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.338480   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.338498   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.338610   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.338632   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.338840   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.338941   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.339461   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.339490   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.339536   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.339565   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.354064   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45647
	I0919 17:52:37.354482   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.354893   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.354912   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.355353   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.355578   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.357181   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.359063   45961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:52:37.357674   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0919 17:52:37.358308   45961 addons.go:231] Setting addon default-storageclass=true in "no-preload-215748"
	W0919 17:52:37.360428   45961 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:52:37.360461   45961 host.go:66] Checking if "no-preload-215748" exists ...
	I0919 17:52:37.360569   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:52:37.360583   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:52:37.360602   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.360832   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.360869   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.360891   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.361393   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.361411   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.361836   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.362040   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.363959   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.364124   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.365928   45961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:52:37.364551   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.364765   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.367579   45961 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:52:37.367592   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:52:37.367609   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.367639   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.367660   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.367827   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.368140   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.370800   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.371215   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.371240   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.371416   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.371612   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.371777   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.371914   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.379222   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
	I0919 17:52:37.379631   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.380097   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.380122   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.380481   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.381718   45961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:52:37.381754   45961 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:52:37.396647   45961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0919 17:52:37.397058   45961 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:52:37.397474   45961 main.go:141] libmachine: Using API Version  1
	I0919 17:52:37.397492   45961 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:52:37.397842   45961 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:52:37.397994   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetState
	I0919 17:52:37.399762   45961 main.go:141] libmachine: (no-preload-215748) Calling .DriverName
	I0919 17:52:37.400224   45961 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:52:37.400239   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:52:37.400255   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHHostname
	I0919 17:52:37.403299   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.403745   45961 main.go:141] libmachine: (no-preload-215748) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:de:8a", ip: ""} in network mk-no-preload-215748: {Iface:virbr2 ExpiryTime:2023-09-19 18:47:00 +0000 UTC Type:0 Mac:52:54:00:ac:de:8a Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:no-preload-215748 Clientid:01:52:54:00:ac:de:8a}
	I0919 17:52:37.403767   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHPort
	I0919 17:52:37.403773   45961 main.go:141] libmachine: (no-preload-215748) DBG | domain no-preload-215748 has defined IP address 192.168.39.15 and MAC address 52:54:00:ac:de:8a in network mk-no-preload-215748
	I0919 17:52:37.403948   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHKeyPath
	I0919 17:52:37.404080   45961 main.go:141] libmachine: (no-preload-215748) Calling .GetSSHUsername
	I0919 17:52:37.404221   45961 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/no-preload-215748/id_rsa Username:docker}
	I0919 17:52:37.448139   45961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-215748" context rescaled to 1 replicas
	I0919 17:52:37.448183   45961 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:52:37.450076   45961 out.go:177] * Verifying Kubernetes components...
	I0919 17:52:37.451036   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:37.579553   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:52:37.592116   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:52:37.604757   45961 node_ready.go:35] waiting up to 6m0s for node "no-preload-215748" to be "Ready" ...
	I0919 17:52:37.605235   45961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:52:37.611496   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:52:37.611523   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:52:37.625762   45961 node_ready.go:49] node "no-preload-215748" has status "Ready":"True"
	I0919 17:52:37.625782   45961 node_ready.go:38] duration metric: took 20.997061ms waiting for node "no-preload-215748" to be "Ready" ...
	I0919 17:52:37.625790   45961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:52:37.638366   45961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.693993   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:52:37.694019   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:52:37.754746   45961 pod_ready.go:92] pod "etcd-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.754769   45961 pod_ready.go:81] duration metric: took 116.377819ms waiting for pod "etcd-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.754782   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.798115   45961 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:52:37.798139   45961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:52:37.815124   45961 pod_ready.go:92] pod "kube-apiserver-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.815192   45961 pod_ready.go:81] duration metric: took 60.393176ms waiting for pod "kube-apiserver-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.815218   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.922999   45961 pod_ready.go:92] pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:37.923022   45961 pod_ready.go:81] duration metric: took 107.794672ms waiting for pod "kube-controller-manager-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.923038   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hk6k2" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:37.995437   45961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:52:39.961838   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.382243112s)
	I0919 17:52:39.961884   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.961893   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.961902   45961 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.356635779s)
	I0919 17:52:39.961928   45961 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0919 17:52:39.961843   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.369699378s)
	I0919 17:52:39.961953   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.961963   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962202   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962219   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962231   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962239   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962348   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962409   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962447   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962490   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962517   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962540   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962553   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962563   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962526   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:39.962601   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:39.962778   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:39.962819   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962828   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962942   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:39.962959   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:39.962972   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:40.064135   45961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.06864457s)
	I0919 17:52:40.064196   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:40.064212   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:40.064511   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:40.064532   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:40.064542   45961 main.go:141] libmachine: Making call to close driver server
	I0919 17:52:40.064552   45961 main.go:141] libmachine: (no-preload-215748) Calling .Close
	I0919 17:52:40.064775   45961 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:52:40.064835   45961 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:52:40.064840   45961 main.go:141] libmachine: (no-preload-215748) DBG | Closing plugin on server side
	I0919 17:52:40.064850   45961 addons.go:467] Verifying addon metrics-server=true in "no-preload-215748"
	I0919 17:52:40.066741   45961 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0919 17:52:37.876720   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:40.068231   45961 addons.go:502] enable addons completed in 2.750388313s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0919 17:52:40.249105   45961 pod_ready.go:102] pod "kube-proxy-hk6k2" in "kube-system" namespace has status "Ready":"False"
	I0919 17:52:40.760507   45961 pod_ready.go:92] pod "kube-proxy-hk6k2" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:40.760532   45961 pod_ready.go:81] duration metric: took 2.837485326s waiting for pod "kube-proxy-hk6k2" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.760546   45961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.770519   45961 pod_ready.go:92] pod "kube-scheduler-no-preload-215748" in "kube-system" namespace has status "Ready":"True"
	I0919 17:52:40.770574   45961 pod_ready.go:81] duration metric: took 9.988955ms waiting for pod "kube-scheduler-no-preload-215748" in "kube-system" namespace to be "Ready" ...
	I0919 17:52:40.770610   45961 pod_ready.go:38] duration metric: took 3.144808421s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
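Note: the pod_ready.go waits above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that kind of check for a single label selector follows; the kubeconfig path, timeout, and polling interval are illustrative assumptions, not minikube's actual pod_ready.go logic.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True, which is the
// same condition the pod_ready.go waits in the log are checking.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Path is illustrative; any kubeconfig pointing at the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
				}
			}
			if allReady {
				fmt.Println("kube-dns pods are Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kube-dns to become Ready")
}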
	I0919 17:52:40.770630   45961 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:52:40.770686   45961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:52:40.806513   45961 api_server.go:72] duration metric: took 3.358300901s to wait for apiserver process to appear ...
	I0919 17:52:40.806538   45961 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:52:40.806556   45961 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0919 17:52:40.812758   45961 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I0919 17:52:40.813960   45961 api_server.go:141] control plane version: v1.28.2
	I0919 17:52:40.813985   45961 api_server.go:131] duration metric: took 7.436946ms to wait for apiserver health ...
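Note: the healthz wait above polls the apiserver's /healthz endpoint until it answers 200 with the body "ok". A rough Go sketch of such a loop is below; skipping TLS verification stands in for the kubeconfig client-certificate setup a real client would use, and the URL and timeouts are taken only as examples.

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("healthz did not return 200 before the deadline")
}

func main() {
	if err := waitForHealthz("https://192.168.39.15:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println("apiserver not healthy:", err)
		return
	}
	fmt.Println("apiserver healthz: ok")
}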
	I0919 17:52:40.813996   45961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:52:40.821498   45961 system_pods.go:59] 8 kube-system pods found
	I0919 17:52:40.821525   45961 system_pods.go:61] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:40.821536   45961 system_pods.go:61] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:40.821543   45961 system_pods.go:61] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:40.821549   45961 system_pods.go:61] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:40.821555   45961 system_pods.go:61] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:40.821563   45961 system_pods.go:61] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:40.821572   45961 system_pods.go:61] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:40.821583   45961 system_pods.go:61] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:40.821599   45961 system_pods.go:74] duration metric: took 7.595377ms to wait for pod list to return data ...
	I0919 17:52:40.821608   45961 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:52:40.828423   45961 default_sa.go:45] found service account: "default"
	I0919 17:52:40.828446   45961 default_sa.go:55] duration metric: took 6.830774ms for default service account to be created ...
	I0919 17:52:40.828455   45961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:52:41.018524   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.018560   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.018569   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.018578   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.018585   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.018591   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.018601   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.018612   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.018625   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.018645   45961 retry.go:31] will retry after 307.254812ms: missing components: kube-dns
	I0919 17:52:41.337815   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.337844   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.337851   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.337856   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.337863   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.337869   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.337875   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.337883   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.337893   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.337915   45961 retry.go:31] will retry after 378.465105ms: missing components: kube-dns
	I0919 17:52:41.734680   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:41.734717   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:52:41.734728   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:41.734736   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:41.734743   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:41.734750   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:41.734757   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:41.734765   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:41.734780   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:52:41.734801   45961 retry.go:31] will retry after 432.849904ms: missing components: kube-dns
	I0919 17:52:42.176510   45961 system_pods.go:86] 8 kube-system pods found
	I0919 17:52:42.176536   45961 system_pods.go:89] "coredns-5dd5756b68-n478x" [9824f626-4a8b-485e-aa70-d88b5ebfb085] Running
	I0919 17:52:42.176545   45961 system_pods.go:89] "etcd-no-preload-215748" [2b32f52c-fa0b-4832-b7b3-eb96c31662bb] Running
	I0919 17:52:42.176552   45961 system_pods.go:89] "kube-apiserver-no-preload-215748" [5644dbf0-6621-47a0-924e-6b570d8618cb] Running
	I0919 17:52:42.176559   45961 system_pods.go:89] "kube-controller-manager-no-preload-215748" [66e1dbd6-d5e7-4523-bc9b-f096b0d17031] Running
	I0919 17:52:42.176569   45961 system_pods.go:89] "kube-proxy-hk6k2" [1512b039-4c8e-45bc-bbca-82215ea569eb] Running
	I0919 17:52:42.176576   45961 system_pods.go:89] "kube-scheduler-no-preload-215748" [07d0f2cd-93d4-4047-b242-445443b972ef] Running
	I0919 17:52:42.176590   45961 system_pods.go:89] "metrics-server-57f55c9bc5-nwxvc" [af38e00c-58bc-455a-bd3e-b9e24ae26d20] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:52:42.176603   45961 system_pods.go:89] "storage-provisioner" [6c8d577d-e182-4428-bb14-10b679241771] Running
	I0919 17:52:42.176616   45961 system_pods.go:126] duration metric: took 1.348155168s to wait for k8s-apps to be running ...
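Note: the retry.go lines above show the poll-and-back-off pattern used while components such as kube-dns come up. The sketch below is a simplified version of that pattern; the backoff factor and the placeholder readiness check are assumptions, not the real retry.go implementation.

package main

import (
	"fmt"
	"time"
)

// retryUntil re-runs check with a growing delay until it succeeds or the
// overall deadline passes. The jitterless backoff here is a simplification.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
}

func main() {
	attempts := 0
	err := retryUntil(2*time.Minute, func() error {
		attempts++
		if attempts < 4 { // stand-in for "is kube-dns Running yet?"
			return fmt.Errorf("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("result:", err)
}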
	I0919 17:52:42.176628   45961 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:52:42.176683   45961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:42.189952   45961 system_svc.go:56] duration metric: took 13.312874ms WaitForService to wait for kubelet.
	I0919 17:52:42.189981   45961 kubeadm.go:581] duration metric: took 4.741777133s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:52:42.190012   45961 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:52:42.194919   45961 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:52:42.194945   45961 node_conditions.go:123] node cpu capacity is 2
	I0919 17:52:42.194957   45961 node_conditions.go:105] duration metric: took 4.939533ms to run NodePressure ...
	I0919 17:52:42.194969   45961 start.go:228] waiting for startup goroutines ...
	I0919 17:52:42.194978   45961 start.go:233] waiting for cluster config update ...
	I0919 17:52:42.194988   45961 start.go:242] writing updated cluster config ...
	I0919 17:52:42.195287   45961 ssh_runner.go:195] Run: rm -f paused
	I0919 17:52:42.245669   45961 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:52:42.248021   45961 out.go:177] * Done! kubectl is now configured to use "no-preload-215748" cluster and "default" namespace by default
	I0919 17:52:41.936906   45696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.723493225s)
	I0919 17:52:41.936983   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:52:41.951451   45696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:52:41.960478   45696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:52:41.968960   45696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:52:41.969031   45696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 17:52:42.019868   45696 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 17:52:42.020027   45696 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:52:42.171083   45696 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:52:42.171221   45696 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:52:42.171332   45696 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:52:42.429760   45696 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:52:42.431619   45696 out.go:204]   - Generating certificates and keys ...
	I0919 17:52:42.431770   45696 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:52:42.431870   45696 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:52:42.431973   45696 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:52:42.432172   45696 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:52:42.432781   45696 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:52:42.433451   45696 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:52:42.434353   45696 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:52:42.435577   45696 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:52:42.436820   45696 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:52:42.438302   45696 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:52:42.439391   45696 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:52:42.439509   45696 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:52:42.929570   45696 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:52:43.332709   45696 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:52:43.433651   45696 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:52:43.695104   45696 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:52:43.696103   45696 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:52:43.699874   45696 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:52:43.701784   45696 out.go:204]   - Booting up control plane ...
	I0919 17:52:43.701926   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:52:43.702063   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:52:43.702819   45696 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:52:43.724659   45696 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:52:43.725576   45696 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:52:43.725671   45696 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 17:52:43.851582   45696 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:52:43.960637   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:47.032663   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:51.355564   45696 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504191 seconds
	I0919 17:52:51.355695   45696 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:52:51.376627   45696 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:52:51.908759   45696 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:52:51.909064   45696 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-415155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 17:52:52.424367   45696 kubeadm.go:322] [bootstrap-token] Using token: kntdz4.46i9d2q57hx70gnb
	I0919 17:52:52.425876   45696 out.go:204]   - Configuring RBAC rules ...
	I0919 17:52:52.425993   45696 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:52:52.433647   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 17:52:52.443514   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:52:52.447239   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:52:52.453258   45696 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:52:52.459432   45696 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:52:52.475208   45696 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 17:52:52.722848   45696 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:52:52.841255   45696 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:52:52.841280   45696 kubeadm.go:322] 
	I0919 17:52:52.841356   45696 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:52:52.841369   45696 kubeadm.go:322] 
	I0919 17:52:52.841456   45696 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:52:52.841464   45696 kubeadm.go:322] 
	I0919 17:52:52.841502   45696 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:52:52.841568   45696 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:52:52.841637   45696 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:52:52.841648   45696 kubeadm.go:322] 
	I0919 17:52:52.841698   45696 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 17:52:52.841704   45696 kubeadm.go:322] 
	I0919 17:52:52.841745   45696 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 17:52:52.841780   45696 kubeadm.go:322] 
	I0919 17:52:52.841875   45696 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:52:52.841942   45696 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:52:52.842039   45696 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:52:52.842048   45696 kubeadm.go:322] 
	I0919 17:52:52.842134   45696 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 17:52:52.842243   45696 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:52:52.842262   45696 kubeadm.go:322] 
	I0919 17:52:52.842358   45696 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kntdz4.46i9d2q57hx70gnb \
	I0919 17:52:52.842491   45696 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 17:52:52.842523   45696 kubeadm.go:322] 	--control-plane 
	I0919 17:52:52.842530   45696 kubeadm.go:322] 
	I0919 17:52:52.842645   45696 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:52:52.842659   45696 kubeadm.go:322] 
	I0919 17:52:52.842773   45696 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kntdz4.46i9d2q57hx70gnb \
	I0919 17:52:52.842930   45696 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 17:52:52.844420   45696 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:52:52.844450   45696 cni.go:84] Creating CNI manager for ""
	I0919 17:52:52.844461   45696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:52:52.846322   45696 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:52:52.848269   45696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:52:52.875578   45696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
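Note: the step above writes a bridge conflist to /etc/cni/net.d/1-k8s.conflist. The log does not include the file's contents, so the constant below only illustrates the general shape of a bridge CNI config; the fields and subnet are generic plugin examples, not minikube's exact 457-byte file.

package main

import "fmt"

// bridgeConflist is a generic example of a bridge + portmap CNI chain.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() { fmt.Println(bridgeConflist) }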
	I0919 17:52:52.905183   45696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:52:52.905261   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:52.905281   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=embed-certs-415155 minikube.k8s.io/updated_at=2023_09_19T17_52_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:52.993717   45696 ops.go:34] apiserver oom_adj: -16
	I0919 17:52:53.208727   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.311165   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.904182   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:54.403711   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:54.904152   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:55.404377   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:53.108640   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:52:55.903772   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.404320   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.904201   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:57.403637   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:57.904174   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:58.404553   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:58.903691   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:59.403716   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:59.903872   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:00.403725   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:52:56.180664   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:00.904540   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:01.404211   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:01.903897   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.403857   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.903841   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:03.404601   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:03.904222   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:04.404483   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:04.903813   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:05.404474   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:02.260629   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:05.332731   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:05.904337   45696 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:53:06.003333   45696 kubeadm.go:1081] duration metric: took 13.098131801s to wait for elevateKubeSystemPrivileges.
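Note: the repeated "kubectl get sa default" runs above are how the appearance of the default service account is awaited before kube-system privileges are granted. A small sketch of the same poll using os/exec follows; the kubectl binary, kubeconfig path, and timeout are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA keeps running `kubectl get sa default` until it exits 0,
// mirroring the repeated "get sa default" calls in the log above.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %v", timeout)
}

func main() {
	if err := waitForDefaultSA(time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account is present")
}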
	I0919 17:53:06.003365   45696 kubeadm.go:406] StartCluster complete in 5m10.10389936s
	I0919 17:53:06.003387   45696 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:53:06.003476   45696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:53:06.005541   45696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:53:06.005772   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:53:06.005785   45696 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:53:06.005854   45696 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-415155"
	I0919 17:53:06.005877   45696 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-415155"
	W0919 17:53:06.005884   45696 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:53:06.005926   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.005930   45696 addons.go:69] Setting default-storageclass=true in profile "embed-certs-415155"
	I0919 17:53:06.005946   45696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-415155"
	I0919 17:53:06.005979   45696 config.go:182] Loaded profile config "embed-certs-415155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:53:06.005982   45696 addons.go:69] Setting metrics-server=true in profile "embed-certs-415155"
	I0919 17:53:06.006009   45696 addons.go:231] Setting addon metrics-server=true in "embed-certs-415155"
	W0919 17:53:06.006026   45696 addons.go:240] addon metrics-server should already be in state true
	I0919 17:53:06.006071   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.006331   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006328   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006364   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.006396   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.006451   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.006493   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.023141   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43557
	I0919 17:53:06.023485   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I0919 17:53:06.023646   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34719
	I0919 17:53:06.023657   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.023882   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.024040   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.024209   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024230   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.024333   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024358   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.024616   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.024697   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.024810   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.024827   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.025260   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.025301   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.025486   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.025695   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.026032   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.026062   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.044712   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40527
	I0919 17:53:06.045176   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.045627   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.045646   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.045976   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.046161   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.047603   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.049519   45696 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:53:06.047878   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0919 17:53:06.052909   45696 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:53:06.052922   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:53:06.052937   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.053277   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.053868   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.053887   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.054337   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.054580   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.056666   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.056710   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.058604   45696 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:53:06.057084   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.057313   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.060027   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.060046   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:53:06.060060   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:53:06.060079   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.060210   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.060497   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.060815   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.062794   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.063165   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.063196   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.063327   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.063475   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.063593   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.063701   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.066891   45696 addons.go:231] Setting addon default-storageclass=true in "embed-certs-415155"
	W0919 17:53:06.066905   45696 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:53:06.066926   45696 host.go:66] Checking if "embed-certs-415155" exists ...
	I0919 17:53:06.066965   45696 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-415155" context rescaled to 1 replicas
	I0919 17:53:06.066987   45696 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.6 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 17:53:06.068622   45696 out.go:177] * Verifying Kubernetes components...
	I0919 17:53:06.067176   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.070241   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.070253   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:53:06.085010   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0919 17:53:06.085392   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.085940   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.085976   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.086322   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.086774   45696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:53:06.086820   45696 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:53:06.101494   45696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0919 17:53:06.101938   45696 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:53:06.102528   45696 main.go:141] libmachine: Using API Version  1
	I0919 17:53:06.102552   45696 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:53:06.103014   45696 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:53:06.103256   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetState
	I0919 17:53:06.104793   45696 main.go:141] libmachine: (embed-certs-415155) Calling .DriverName
	I0919 17:53:06.105087   45696 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:53:06.105107   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:53:06.105127   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHHostname
	I0919 17:53:06.107742   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.108073   45696 main.go:141] libmachine: (embed-certs-415155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:77:01", ip: ""} in network mk-embed-certs-415155: {Iface:virbr4 ExpiryTime:2023-09-19 18:47:39 +0000 UTC Type:0 Mac:52:54:00:da:77:01 Iaid: IPaddr:192.168.50.6 Prefix:24 Hostname:embed-certs-415155 Clientid:01:52:54:00:da:77:01}
	I0919 17:53:06.108105   45696 main.go:141] libmachine: (embed-certs-415155) DBG | domain embed-certs-415155 has defined IP address 192.168.50.6 and MAC address 52:54:00:da:77:01 in network mk-embed-certs-415155
	I0919 17:53:06.108336   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHPort
	I0919 17:53:06.108547   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHKeyPath
	I0919 17:53:06.108744   45696 main.go:141] libmachine: (embed-certs-415155) Calling .GetSSHUsername
	I0919 17:53:06.108908   45696 sshutil.go:53] new ssh client: &{IP:192.168.50.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/embed-certs-415155/id_rsa Username:docker}
	I0919 17:53:06.205454   45696 node_ready.go:35] waiting up to 6m0s for node "embed-certs-415155" to be "Ready" ...
	I0919 17:53:06.205565   45696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:53:06.225929   45696 node_ready.go:49] node "embed-certs-415155" has status "Ready":"True"
	I0919 17:53:06.225949   45696 node_ready.go:38] duration metric: took 20.464817ms waiting for node "embed-certs-415155" to be "Ready" ...
	I0919 17:53:06.225957   45696 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:53:06.251954   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:53:06.251981   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:53:06.269198   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:53:06.296923   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:53:06.314108   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:53:06.314141   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:53:06.338106   45696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:06.378123   45696 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:53:06.378154   45696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:53:06.492313   45696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:53:08.235564   45696 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.029959877s)
	I0919 17:53:08.235599   45696 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0919 17:53:08.597917   45696 pod_ready.go:102] pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace has status "Ready":"False"
	I0919 17:53:08.741920   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.44495643s)
	I0919 17:53:08.741982   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.741995   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.741926   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.472691573s)
	I0919 17:53:08.742031   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742050   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742377   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742393   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742403   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742413   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742492   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.742542   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742555   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742566   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742576   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742617   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742630   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.742643   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.742651   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.742771   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.742785   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.744274   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.744297   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.818418   45696 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.326058126s)
	I0919 17:53:08.818472   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.818486   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.818839   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.818891   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.818927   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.818938   45696 main.go:141] libmachine: Making call to close driver server
	I0919 17:53:08.818948   45696 main.go:141] libmachine: (embed-certs-415155) Calling .Close
	I0919 17:53:08.820442   45696 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:53:08.820464   45696 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:53:08.820474   45696 addons.go:467] Verifying addon metrics-server=true in "embed-certs-415155"
	I0919 17:53:08.820479   45696 main.go:141] libmachine: (embed-certs-415155) DBG | Closing plugin on server side
	I0919 17:53:08.822508   45696 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0919 17:53:08.824220   45696 addons.go:502] enable addons completed in 2.818433307s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0919 17:53:10.561437   45696 pod_ready.go:92] pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.561462   45696 pod_ready.go:81] duration metric: took 4.223330172s waiting for pod "coredns-5dd5756b68-2dbbk" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.561472   45696 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.568541   45696 pod_ready.go:92] pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.568566   45696 pod_ready.go:81] duration metric: took 7.086927ms waiting for pod "coredns-5dd5756b68-6gswl" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.568579   45696 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.577684   45696 pod_ready.go:92] pod "etcd-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.577709   45696 pod_ready.go:81] duration metric: took 9.120912ms waiting for pod "etcd-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.577722   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.585005   45696 pod_ready.go:92] pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.585033   45696 pod_ready.go:81] duration metric: took 7.302173ms waiting for pod "kube-apiserver-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.585043   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.590934   45696 pod_ready.go:92] pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:10.590951   45696 pod_ready.go:81] duration metric: took 5.90203ms waiting for pod "kube-controller-manager-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:10.590960   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b75j2" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.358510   45696 pod_ready.go:92] pod "kube-proxy-b75j2" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:11.358535   45696 pod_ready.go:81] duration metric: took 767.569086ms waiting for pod "kube-proxy-b75j2" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.358544   45696 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.759839   45696 pod_ready.go:92] pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace has status "Ready":"True"
	I0919 17:53:11.759863   45696 pod_ready.go:81] duration metric: took 401.313058ms waiting for pod "kube-scheduler-embed-certs-415155" in "kube-system" namespace to be "Ready" ...
	I0919 17:53:11.759872   45696 pod_ready.go:38] duration metric: took 5.533896789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:53:11.759887   45696 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:53:11.759933   45696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:53:11.773700   45696 api_server.go:72] duration metric: took 5.706687251s to wait for apiserver process to appear ...
	I0919 17:53:11.773730   45696 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:53:11.773747   45696 api_server.go:253] Checking apiserver healthz at https://192.168.50.6:8443/healthz ...
	I0919 17:53:11.784435   45696 api_server.go:279] https://192.168.50.6:8443/healthz returned 200:
	ok
	I0919 17:53:11.785929   45696 api_server.go:141] control plane version: v1.28.2
	I0919 17:53:11.785952   45696 api_server.go:131] duration metric: took 12.214361ms to wait for apiserver health ...
	I0919 17:53:11.785971   45696 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:53:11.961906   45696 system_pods.go:59] 9 kube-system pods found
	I0919 17:53:11.961937   45696 system_pods.go:61] "coredns-5dd5756b68-2dbbk" [93175ebd-b717-4c98-a56b-aca1404ac8bd] Running
	I0919 17:53:11.961945   45696 system_pods.go:61] "coredns-5dd5756b68-6gswl" [6dee60e5-fa0a-4b65-b4d1-dbb0fff29d3f] Running
	I0919 17:53:11.961952   45696 system_pods.go:61] "etcd-embed-certs-415155" [47f8759f-be32-4f38-9bc4-c9a0578b6303] Running
	I0919 17:53:11.961959   45696 system_pods.go:61] "kube-apiserver-embed-certs-415155" [8682e140-4951-4ea4-a7df-5b1324e33094] Running
	I0919 17:53:11.961967   45696 system_pods.go:61] "kube-controller-manager-embed-certs-415155" [00572708-93f6-4bf5-b6c4-83e9c588b071] Running
	I0919 17:53:11.961973   45696 system_pods.go:61] "kube-proxy-b75j2" [7be05aae-86ca-4640-a0f3-6518e7896711] Running
	I0919 17:53:11.961981   45696 system_pods.go:61] "kube-scheduler-embed-certs-415155" [4532a79d-a306-46a9-a0aa-d1f48b90645c] Running
	I0919 17:53:11.961991   45696 system_pods.go:61] "metrics-server-57f55c9bc5-kdxsz" [1588f0a7-18ae-402b-8916-e3a6423e9e15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:53:11.962003   45696 system_pods.go:61] "storage-provisioner" [83e9eb53-dd92-4b84-a787-82bea5449cd2] Running
	I0919 17:53:11.962013   45696 system_pods.go:74] duration metric: took 176.035985ms to wait for pod list to return data ...
	I0919 17:53:11.962027   45696 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:53:12.157305   45696 default_sa.go:45] found service account: "default"
	I0919 17:53:12.157328   45696 default_sa.go:55] duration metric: took 195.295342ms for default service account to be created ...
	I0919 17:53:12.157336   45696 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:53:12.359884   45696 system_pods.go:86] 9 kube-system pods found
	I0919 17:53:12.359910   45696 system_pods.go:89] "coredns-5dd5756b68-2dbbk" [93175ebd-b717-4c98-a56b-aca1404ac8bd] Running
	I0919 17:53:12.359916   45696 system_pods.go:89] "coredns-5dd5756b68-6gswl" [6dee60e5-fa0a-4b65-b4d1-dbb0fff29d3f] Running
	I0919 17:53:12.359920   45696 system_pods.go:89] "etcd-embed-certs-415155" [47f8759f-be32-4f38-9bc4-c9a0578b6303] Running
	I0919 17:53:12.359924   45696 system_pods.go:89] "kube-apiserver-embed-certs-415155" [8682e140-4951-4ea4-a7df-5b1324e33094] Running
	I0919 17:53:12.359929   45696 system_pods.go:89] "kube-controller-manager-embed-certs-415155" [00572708-93f6-4bf5-b6c4-83e9c588b071] Running
	I0919 17:53:12.359932   45696 system_pods.go:89] "kube-proxy-b75j2" [7be05aae-86ca-4640-a0f3-6518e7896711] Running
	I0919 17:53:12.359936   45696 system_pods.go:89] "kube-scheduler-embed-certs-415155" [4532a79d-a306-46a9-a0aa-d1f48b90645c] Running
	I0919 17:53:12.359943   45696 system_pods.go:89] "metrics-server-57f55c9bc5-kdxsz" [1588f0a7-18ae-402b-8916-e3a6423e9e15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:53:12.359948   45696 system_pods.go:89] "storage-provisioner" [83e9eb53-dd92-4b84-a787-82bea5449cd2] Running
	I0919 17:53:12.359956   45696 system_pods.go:126] duration metric: took 202.614357ms to wait for k8s-apps to be running ...
	I0919 17:53:12.359962   45696 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:53:12.359999   45696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:53:12.373545   45696 system_svc.go:56] duration metric: took 13.572497ms WaitForService to wait for kubelet.
	I0919 17:53:12.373579   45696 kubeadm.go:581] duration metric: took 6.30657382s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:53:12.373607   45696 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:53:12.557409   45696 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:53:12.557435   45696 node_conditions.go:123] node cpu capacity is 2
	I0919 17:53:12.557444   45696 node_conditions.go:105] duration metric: took 183.83246ms to run NodePressure ...
	I0919 17:53:12.557455   45696 start.go:228] waiting for startup goroutines ...
	I0919 17:53:12.557461   45696 start.go:233] waiting for cluster config update ...
	I0919 17:53:12.557469   45696 start.go:242] writing updated cluster config ...
	I0919 17:53:12.557699   45696 ssh_runner.go:195] Run: rm -f paused
	I0919 17:53:12.605145   45696 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:53:12.607197   45696 out.go:177] * Done! kubectl is now configured to use "embed-certs-415155" cluster and "default" namespace by default
	I0919 17:53:11.412630   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:14.488732   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:20.564623   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:23.636680   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:29.716717   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:32.788701   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:38.868669   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:41.940647   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:48.020643   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:51.092656   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:53:57.172691   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:54:00.244719   47798 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.182:22: connect: no route to host
	I0919 17:54:03.245602   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:54:03.245640   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:03.247321   47798 machine.go:91] provisioned docker machine in 4m37.423277683s
	I0919 17:54:03.247365   47798 fix.go:56] fixHost completed within 4m37.445374366s
	I0919 17:54:03.247373   47798 start.go:83] releasing machines lock for "old-k8s-version-100627", held for 4m37.445391375s
	W0919 17:54:03.247389   47798 start.go:688] error starting host: provision: host is not running
	W0919 17:54:03.247488   47798 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0919 17:54:03.247503   47798 start.go:703] Will try again in 5 seconds ...
	I0919 17:54:08.249214   47798 start.go:365] acquiring machines lock for old-k8s-version-100627: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:54:08.249335   47798 start.go:369] acquired machines lock for "old-k8s-version-100627" in 79.973µs
	I0919 17:54:08.249367   47798 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:54:08.249377   47798 fix.go:54] fixHost starting: 
	I0919 17:54:08.249707   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:54:08.249734   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:54:08.264866   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I0919 17:54:08.265315   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:54:08.265726   47798 main.go:141] libmachine: Using API Version  1
	I0919 17:54:08.265759   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:54:08.266072   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:54:08.266269   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:08.266419   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 17:54:08.267941   47798 fix.go:102] recreateIfNeeded on old-k8s-version-100627: state=Stopped err=<nil>
	I0919 17:54:08.267960   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	W0919 17:54:08.268118   47798 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:54:08.269915   47798 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-100627" ...
	I0919 17:54:08.271210   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Start
	I0919 17:54:08.271445   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring networks are active...
	I0919 17:54:08.272016   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network default is active
	I0919 17:54:08.272329   47798 main.go:141] libmachine: (old-k8s-version-100627) Ensuring network mk-old-k8s-version-100627 is active
	I0919 17:54:08.272743   47798 main.go:141] libmachine: (old-k8s-version-100627) Getting domain xml...
	I0919 17:54:08.273350   47798 main.go:141] libmachine: (old-k8s-version-100627) Creating domain...
	I0919 17:54:09.557879   47798 main.go:141] libmachine: (old-k8s-version-100627) Waiting to get IP...
	I0919 17:54:09.558718   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:09.559190   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:09.559270   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:09.559173   48693 retry.go:31] will retry after 309.613104ms: waiting for machine to come up
	I0919 17:54:09.870868   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:09.871472   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:09.871496   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:09.871435   48693 retry.go:31] will retry after 375.744574ms: waiting for machine to come up
	I0919 17:54:10.249255   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.249750   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.249780   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.249702   48693 retry.go:31] will retry after 305.257713ms: waiting for machine to come up
	I0919 17:54:10.556042   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.556587   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.556621   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.556510   48693 retry.go:31] will retry after 394.207165ms: waiting for machine to come up
	I0919 17:54:10.952178   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:10.952797   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:10.952828   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:10.952732   48693 retry.go:31] will retry after 706.704251ms: waiting for machine to come up
	I0919 17:54:11.660566   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:11.661038   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:11.661061   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:11.660988   48693 retry.go:31] will retry after 924.155076ms: waiting for machine to come up
	I0919 17:54:12.586278   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:12.586772   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:12.586805   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:12.586721   48693 retry.go:31] will retry after 1.035300526s: waiting for machine to come up
	I0919 17:54:13.623123   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:13.623597   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:13.623622   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:13.623562   48693 retry.go:31] will retry after 1.060639157s: waiting for machine to come up
	I0919 17:54:14.685531   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:14.686012   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:14.686044   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:14.685973   48693 retry.go:31] will retry after 1.61320677s: waiting for machine to come up
	I0919 17:54:16.301447   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:16.301908   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:16.301957   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:16.301864   48693 retry.go:31] will retry after 2.031293541s: waiting for machine to come up
	I0919 17:54:18.334791   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:18.335384   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:18.335440   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:18.335329   48693 retry.go:31] will retry after 1.861837572s: waiting for machine to come up
	I0919 17:54:20.199546   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:20.200058   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:20.200088   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:20.200009   48693 retry.go:31] will retry after 2.332364238s: waiting for machine to come up
	I0919 17:54:22.533654   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:22.534131   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | unable to find current IP address of domain old-k8s-version-100627 in network mk-old-k8s-version-100627
	I0919 17:54:22.534162   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | I0919 17:54:22.534071   48693 retry.go:31] will retry after 4.475201998s: waiting for machine to come up
	I0919 17:54:27.013553   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.014052   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has current primary IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.014075   47798 main.go:141] libmachine: (old-k8s-version-100627) Found IP for machine: 192.168.72.182
	I0919 17:54:27.014091   47798 main.go:141] libmachine: (old-k8s-version-100627) Reserving static IP address...
	I0919 17:54:27.014512   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "old-k8s-version-100627", mac: "52:54:00:ee:1d:e7", ip: "192.168.72.182"} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.014535   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | skip adding static IP to network mk-old-k8s-version-100627 - found existing host DHCP lease matching {name: "old-k8s-version-100627", mac: "52:54:00:ee:1d:e7", ip: "192.168.72.182"}
	I0919 17:54:27.014560   47798 main.go:141] libmachine: (old-k8s-version-100627) Reserved static IP address: 192.168.72.182
	I0919 17:54:27.014579   47798 main.go:141] libmachine: (old-k8s-version-100627) Waiting for SSH to be available...
	I0919 17:54:27.014592   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Getting to WaitForSSH function...
	I0919 17:54:27.016929   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.017394   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.017431   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.017594   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH client type: external
	I0919 17:54:27.017634   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa (-rw-------)
	I0919 17:54:27.017678   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:54:27.017700   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | About to run SSH command:
	I0919 17:54:27.017711   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | exit 0
	I0919 17:54:27.112557   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | SSH cmd err, output: <nil>: 
	I0919 17:54:27.112933   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetConfigRaw
	I0919 17:54:27.113574   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:27.116176   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.116556   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.116581   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.116841   47798 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/config.json ...
	I0919 17:54:27.117019   47798 machine.go:88] provisioning docker machine ...
	I0919 17:54:27.117036   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:27.117261   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.117429   47798 buildroot.go:166] provisioning hostname "old-k8s-version-100627"
	I0919 17:54:27.117447   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.117599   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.119667   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.119987   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.120020   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.120131   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.120278   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.120442   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.120625   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.120795   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.121114   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.121128   47798 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-100627 && echo "old-k8s-version-100627" | sudo tee /etc/hostname
	I0919 17:54:27.264601   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-100627
	
	I0919 17:54:27.264628   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.267433   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.267871   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.267906   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.268044   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.268260   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.268459   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.268589   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.268764   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.269227   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.269258   47798 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-100627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-100627/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-100627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:54:27.408513   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:54:27.408544   47798 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 17:54:27.408566   47798 buildroot.go:174] setting up certificates
	I0919 17:54:27.408590   47798 provision.go:83] configureAuth start
	I0919 17:54:27.408607   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetMachineName
	I0919 17:54:27.408923   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:27.411896   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.412345   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.412376   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.412595   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.414909   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.415293   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.415331   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.415417   47798 provision.go:138] copyHostCerts
	I0919 17:54:27.415479   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 17:54:27.415491   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 17:54:27.415556   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 17:54:27.415662   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 17:54:27.415675   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 17:54:27.415721   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 17:54:27.415941   47798 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 17:54:27.415954   47798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 17:54:27.415990   47798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 17:54:27.416043   47798 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-100627 san=[192.168.72.182 192.168.72.182 localhost 127.0.0.1 minikube old-k8s-version-100627]
	I0919 17:54:27.473903   47798 provision.go:172] copyRemoteCerts
	I0919 17:54:27.473953   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:54:27.473978   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.476857   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.477234   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.477272   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.477453   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.477649   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.477818   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.477957   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:27.578694   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 17:54:27.603580   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 17:54:27.629314   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:54:27.653764   47798 provision.go:86] duration metric: configureAuth took 245.159127ms
	I0919 17:54:27.653788   47798 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:54:27.653989   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 17:54:27.654081   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:27.656608   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.657058   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:27.657113   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:27.657286   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:27.657453   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.657605   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:27.657785   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:27.657972   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:27.658276   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:27.658292   47798 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 17:54:28.000190   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 17:54:28.000238   47798 machine.go:91] provisioned docker machine in 883.206741ms
	I0919 17:54:28.000251   47798 start.go:300] post-start starting for "old-k8s-version-100627" (driver="kvm2")
	I0919 17:54:28.000265   47798 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:54:28.000288   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.000617   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:54:28.000650   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.003541   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.003980   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.004027   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.004182   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.004383   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.004583   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.004749   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.099219   47798 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:54:28.103738   47798 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:54:28.103766   47798 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 17:54:28.103853   47798 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 17:54:28.103953   47798 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 17:54:28.104066   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:54:28.115827   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:54:28.139080   47798 start.go:303] post-start completed in 138.802144ms
	I0919 17:54:28.139102   47798 fix.go:56] fixHost completed within 19.88972528s
	I0919 17:54:28.139121   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.141760   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.142169   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.142195   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.142396   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.142607   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.142726   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.142917   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.143114   47798 main.go:141] libmachine: Using SSH client type: native
	I0919 17:54:28.143573   47798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.182 22 <nil> <nil>}
	I0919 17:54:28.143592   47798 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:54:28.277495   47798 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695146068.223192427
	
	I0919 17:54:28.277520   47798 fix.go:206] guest clock: 1695146068.223192427
	I0919 17:54:28.277530   47798 fix.go:219] Guest: 2023-09-19 17:54:28.223192427 +0000 UTC Remote: 2023-09-19 17:54:28.139105122 +0000 UTC m=+302.480491248 (delta=84.087305ms)
	I0919 17:54:28.277553   47798 fix.go:190] guest clock delta is within tolerance: 84.087305ms
	I0919 17:54:28.277559   47798 start.go:83] releasing machines lock for "old-k8s-version-100627", held for 20.02820818s
	I0919 17:54:28.277581   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.277863   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:28.280976   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.281274   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.281314   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.281491   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282065   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282261   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 17:54:28.282362   47798 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:54:28.282425   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.282518   47798 ssh_runner.go:195] Run: cat /version.json
	I0919 17:54:28.282557   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 17:54:28.285235   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285574   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285626   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.285660   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.285758   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.285980   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.286009   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:28.286037   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:28.286133   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.286185   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 17:54:28.286298   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.286345   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 17:54:28.286479   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 17:54:28.286613   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 17:54:28.377342   47798 ssh_runner.go:195] Run: systemctl --version
	I0919 17:54:28.402900   47798 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 17:54:28.551979   47798 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:54:28.558949   47798 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:54:28.559040   47798 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:54:28.574671   47798 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:54:28.574707   47798 start.go:469] detecting cgroup driver to use...
	I0919 17:54:28.574789   47798 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:54:28.589301   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:54:28.603381   47798 docker.go:196] disabling cri-docker service (if available) ...
	I0919 17:54:28.603456   47798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 17:54:28.616574   47798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 17:54:28.630029   47798 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 17:54:28.735665   47798 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 17:54:28.855576   47798 docker.go:212] disabling docker service ...
	I0919 17:54:28.855656   47798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 17:54:28.869977   47798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 17:54:28.883344   47798 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 17:54:29.010033   47798 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 17:54:29.123737   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 17:54:29.136560   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:54:29.153418   47798 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0919 17:54:29.153472   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.164328   47798 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 17:54:29.164376   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.175468   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.186361   47798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 17:54:29.197606   47798 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:54:29.209144   47798 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:54:29.219566   47798 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 17:54:29.219608   47798 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 17:54:29.232771   47798 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:54:29.241491   47798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:54:29.363253   47798 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 17:54:29.564774   47798 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 17:54:29.564853   47798 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 17:54:29.570170   47798 start.go:537] Will wait 60s for crictl version
	I0919 17:54:29.570236   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:29.574361   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:54:29.613496   47798 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 17:54:29.613591   47798 ssh_runner.go:195] Run: crio --version
	I0919 17:54:29.668331   47798 ssh_runner.go:195] Run: crio --version
	I0919 17:54:29.724060   47798 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0919 17:54:29.725565   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetIP
	I0919 17:54:29.728603   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:29.729060   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 17:54:29.729090   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 17:54:29.729325   47798 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0919 17:54:29.733860   47798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:54:29.745878   47798 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 17:54:29.745937   47798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:54:29.783853   47798 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:54:29.783912   47798 ssh_runner.go:195] Run: which lz4
	I0919 17:54:29.787843   47798 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0919 17:54:29.792095   47798 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:54:29.792124   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0919 17:54:31.578682   47798 crio.go:444] Took 1.790863 seconds to copy over tarball
	I0919 17:54:31.578766   47798 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 17:54:34.491190   47798 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.912396501s)
	I0919 17:54:34.491218   47798 crio.go:451] Took 2.912514 seconds to extract the tarball
	I0919 17:54:34.491227   47798 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 17:54:34.532896   47798 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 17:54:34.584238   47798 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0919 17:54:34.584259   47798 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 17:54:34.584318   47798 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.584343   47798 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0919 17:54:34.584357   47798 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.584378   47798 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:34.584540   47798 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.584551   47798 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.584565   47798 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.584321   47798 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:34.586227   47798 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:34.586227   47798 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.586253   47798 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.586228   47798 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.586234   47798 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0919 17:54:34.586352   47798 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.586266   47798 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:34.586581   47798 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.759785   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0919 17:54:34.802920   47798 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0919 17:54:34.802955   47798 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0919 17:54:34.803013   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:34.807458   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0919 17:54:34.847013   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0919 17:54:34.847128   47798 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.852501   47798 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0919 17:54:34.852523   47798 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.852579   47798 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0919 17:54:34.853807   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0919 17:54:34.857117   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:34.858504   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:34.859676   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:34.868306   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:34.920560   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:35.645907   47798 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:54:37.386271   47798 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.533664793s)
	I0919 17:54:37.386302   47798 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0919 17:54:37.386337   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2: (2.532490506s)
	I0919 17:54:37.386377   47798 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0919 17:54:37.386391   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0: (2.529252811s)
	I0919 17:54:37.386410   47798 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0919 17:54:37.386437   47798 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0919 17:54:37.386458   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386462   47798 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:37.386469   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0: (2.527943734s)
	I0919 17:54:37.386508   47798 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0919 17:54:37.386516   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386529   47798 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:37.386549   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0: (2.526835511s)
	I0919 17:54:37.386581   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0: (2.518230422s)
	I0919 17:54:37.386605   47798 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0919 17:54:37.386609   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0: (2.466014033s)
	I0919 17:54:37.386609   47798 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0919 17:54:37.386628   47798 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:37.386629   47798 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:37.386638   47798 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0919 17:54:37.386566   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386659   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386659   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386662   47798 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:37.386765   47798 ssh_runner.go:195] Run: which crictl
	I0919 17:54:37.386701   47798 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.740765346s)
	I0919 17:54:37.399029   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0919 17:54:37.399077   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:54:37.399121   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:54:37.399122   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:54:37.402150   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:54:37.402313   47798 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0919 17:54:37.540994   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0919 17:54:37.541026   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0919 17:54:37.541059   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0919 17:54:37.541106   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0919 17:54:37.541145   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0919 17:54:37.549028   47798 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0919 17:54:37.549081   47798 cache_images.go:92] LoadImages completed in 2.964810789s
	W0919 17:54:37.549147   47798 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
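The warning above is raised because the per-image tarballs under .minikube/cache/images were never downloaded, so the cached-image load is skipped and the images are pulled later instead. A small illustrative Go sketch (not minikube code; the cache directory and image file names are taken from the log) that reports which cache files are actually present:

    // Sketch: report which cached image tarballs exist before attempting a load.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	cacheDir := "/home/jenkins/minikube-integration/17240-6042/.minikube/cache/images/amd64/registry.k8s.io"
    	// File names follow the "<name>_<tag>" convention visible in the log above.
    	images := []string{"etcd_3.3.15-0", "kube-apiserver_v1.16.0", "coredns_1.6.2"}
    	for _, img := range images {
    		p := filepath.Join(cacheDir, img)
    		if _, err := os.Stat(p); err != nil {
    			fmt.Printf("missing: %s (%v)\n", p, err)
    			continue
    		}
    		fmt.Printf("cached:  %s\n", p)
    	}
    }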
	I0919 17:54:37.549230   47798 ssh_runner.go:195] Run: crio config
	I0919 17:54:37.603915   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:54:37.603954   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:54:37.603977   47798 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:54:37.604007   47798 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.182 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-100627 NodeName:old-k8s-version-100627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0919 17:54:37.604180   47798 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-100627"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-100627
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.182:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
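The kubeadm config rendered above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch, assuming gopkg.in/yaml.v3 and not part of minikube, of sanity-checking every document in such a file before it is copied into place:

    // Sketch: decode each YAML document in the rendered kubeadm config and
    // print its kind/apiVersion, failing fast on malformed YAML.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc map[string]interface{}
    		err := dec.Decode(&doc)
    		if errors.Is(err, io.EOF) {
    			break
    		}
    		if err != nil {
    			panic(fmt.Sprintf("document %d is not valid YAML: %v", i, err))
    		}
    		fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, doc["kind"], doc["apiVersion"])
    	}
    }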
	
	I0919 17:54:37.604310   47798 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-100627 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:54:37.604383   47798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0919 17:54:37.614235   47798 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:54:37.614296   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:54:37.623423   47798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0919 17:54:37.640384   47798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:54:37.656081   47798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0919 17:54:37.672787   47798 ssh_runner.go:195] Run: grep 192.168.72.182	control-plane.minikube.internal$ /etc/hosts
	I0919 17:54:37.676417   47798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:54:37.687828   47798 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627 for IP: 192.168.72.182
	I0919 17:54:37.687874   47798 certs.go:190] acquiring lock for shared ca certs: {Name:mkcd9535aaeb523a5533c912732bdd9b445557da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:54:37.688058   47798 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key
	I0919 17:54:37.688143   47798 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key
	I0919 17:54:37.688222   47798 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.key
	I0919 17:54:37.688279   47798 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key.3425b032
	I0919 17:54:37.688322   47798 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key
	I0919 17:54:37.688488   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem (1338 bytes)
	W0919 17:54:37.688531   47798 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239_empty.pem, impossibly tiny 0 bytes
	I0919 17:54:37.688546   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:54:37.688579   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem (1082 bytes)
	I0919 17:54:37.688609   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:54:37.688636   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem (1675 bytes)
	I0919 17:54:37.688697   47798 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem (1708 bytes)
	I0919 17:54:37.689406   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:54:37.714671   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 17:54:37.737884   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:54:37.761839   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 17:54:37.784692   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:54:37.810865   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 17:54:37.832897   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:54:37.856026   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 17:54:37.879335   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /usr/share/ca-certificates/132392.pem (1708 bytes)
	I0919 17:54:37.902377   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:54:37.924388   47798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/13239.pem --> /usr/share/ca-certificates/13239.pem (1338 bytes)
	I0919 17:54:37.948816   47798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:54:37.965669   47798 ssh_runner.go:195] Run: openssl version
	I0919 17:54:37.971227   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132392.pem && ln -fs /usr/share/ca-certificates/132392.pem /etc/ssl/certs/132392.pem"
	I0919 17:54:37.983269   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.988756   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:45 /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.988807   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132392.pem
	I0919 17:54:37.994392   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132392.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:54:38.006098   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:54:38.017868   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.022601   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:36 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.022655   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:54:38.028421   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:54:38.039288   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13239.pem && ln -fs /usr/share/ca-certificates/13239.pem /etc/ssl/certs/13239.pem"
	I0919 17:54:38.053131   47798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.057881   47798 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:45 /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.057938   47798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13239.pem
	I0919 17:54:38.063816   47798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13239.pem /etc/ssl/certs/51391683.0"
	I0919 17:54:38.074972   47798 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:54:38.080260   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:54:38.085942   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:54:38.091638   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:54:38.097282   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:54:38.103194   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 17:54:38.109759   47798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
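Each of the openssl runs above uses -checkend 86400, i.e. it fails if the certificate expires within the next 24 hours. A minimal Go equivalent of that check (illustrative only; the path is one of the certificates listed in the log):

    // Sketch: fail if the certificate is no longer valid 24 hours from now,
    // mirroring "openssl x509 -noout -checkend 86400".
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid until", cert.NotAfter)
    }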
	I0919 17:54:38.115202   47798 kubeadm.go:404] StartCluster: {Name:old-k8s-version-100627 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-100627 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[metrics-server:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:54:38.115274   47798 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 17:54:38.115313   47798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:54:38.153988   47798 cri.go:89] found id: ""
	I0919 17:54:38.154063   47798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:54:38.164888   47798 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0919 17:54:38.164913   47798 kubeadm.go:636] restartCluster start
	I0919 17:54:38.164965   47798 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 17:54:38.174810   47798 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.175856   47798 kubeconfig.go:92] found "old-k8s-version-100627" server: "https://192.168.72.182:8443"
	I0919 17:54:38.178372   47798 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 17:54:38.187917   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.187969   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.199654   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.199674   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.199715   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.211155   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:38.712221   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:38.712312   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:38.725306   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:39.211431   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:39.211494   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:39.223919   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:39.711400   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:39.711482   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:39.724103   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:40.211311   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:40.211379   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:40.224111   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:40.711529   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:40.711609   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:40.724291   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:41.212183   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:41.212285   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:41.225226   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:41.711742   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:41.711821   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:41.724590   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:42.212221   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:42.212289   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:42.225772   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:42.711304   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:42.711378   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:42.724468   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:43.211895   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:43.211978   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:43.225017   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:43.711734   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:43.711824   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:43.724995   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:44.211535   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:44.211616   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:44.224372   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:44.712113   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:44.712179   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:44.725330   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:45.211942   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:45.212027   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:45.226290   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:45.712216   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:45.712295   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:45.725065   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:46.212053   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:46.212150   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:46.226417   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:46.711997   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:46.712082   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:46.725608   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:47.212214   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:47.212300   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:47.224935   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:47.711452   47798 api_server.go:166] Checking apiserver status ...
	I0919 17:54:47.711540   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:54:47.723970   47798 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:54:48.188749   47798 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0919 17:54:48.188785   47798 kubeadm.go:1128] stopping kube-system containers ...
	I0919 17:54:48.188800   47798 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0919 17:54:48.188862   47798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 17:54:48.227729   47798 cri.go:89] found id: ""
	I0919 17:54:48.227789   47798 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 17:54:48.243618   47798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:54:48.253221   47798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:54:48.253285   47798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:54:48.262806   47798 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0919 17:54:48.262831   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:48.405093   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.114151   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.324152   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.457833   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:49.554530   47798 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:54:49.554595   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:49.568050   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:50.092864   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:50.592484   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:51.092979   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:54:51.114757   47798 api_server.go:72] duration metric: took 1.560225697s to wait for apiserver process to appear ...
	I0919 17:54:51.114781   47798 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:54:51.114800   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:56.115914   47798 api_server.go:269] stopped: https://192.168.72.182:8443/healthz: Get "https://192.168.72.182:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 17:54:56.115962   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:57.769883   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:54:57.769915   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:54:58.270598   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:58.278169   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:54:58.278210   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0919 17:54:58.770880   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:58.778649   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:54:58.778679   47798 api_server.go:103] status: https://192.168.72.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0919 17:54:59.270233   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 17:54:59.276275   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0919 17:54:59.283868   47798 api_server.go:141] control plane version: v1.16.0
	I0919 17:54:59.283896   47798 api_server.go:131] duration metric: took 8.169106612s to wait for apiserver health ...
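The healthz probes above go straight to https://192.168.72.182:8443/healthz, first getting a 403 for the anonymous user, then 500 while post-start hooks finish, and finally 200. A minimal sketch of the same kind of poll (not minikube's api_server.go; TLS verification is skipped because the test VM's CA is not in the client's trust store):

    // Sketch: poll the apiserver /healthz endpoint until it returns 200 OK.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	url := "https://192.168.72.182:8443/healthz" // address from the log
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("healthz not reachable yet:", err)
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }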
	I0919 17:54:59.283908   47798 cni.go:84] Creating CNI manager for ""
	I0919 17:54:59.283916   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 17:54:59.285960   47798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:54:59.287537   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:54:59.298142   47798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0919 17:54:59.315861   47798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:54:59.324878   47798 system_pods.go:59] 8 kube-system pods found
	I0919 17:54:59.324917   47798 system_pods.go:61] "coredns-5644d7b6d9-4mh4f" [382ef590-a6ef-4402-8762-1649f060fbc4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:54:59.324940   47798 system_pods.go:61] "coredns-5644d7b6d9-wqwp7" [8756ca49-2953-422d-a534-6d1fa5655fbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:54:59.324947   47798 system_pods.go:61] "etcd-old-k8s-version-100627" [1e7bdb28-9c7e-4cae-a87e-ec2fad64e820] Running
	I0919 17:54:59.324955   47798 system_pods.go:61] "kube-apiserver-old-k8s-version-100627" [59a703b6-7c16-48ba-8a78-c1ecd606f138] Running
	I0919 17:54:59.324966   47798 system_pods.go:61] "kube-controller-manager-old-k8s-version-100627" [ac10d741-9a7d-45a1-86f5-a912075b49b9] Running
	I0919 17:54:59.324971   47798 system_pods.go:61] "kube-proxy-j7kqn" [79381ec1-45a7-4424-8383-f97b530979d3] Running
	I0919 17:54:59.324986   47798 system_pods.go:61] "kube-scheduler-old-k8s-version-100627" [40df95ee-b184-48ff-b276-d01c7763c7fc] Running
	I0919 17:54:59.324993   47798 system_pods.go:61] "storage-provisioner" [00e5e0c9-0453-440b-aa5c-e6811f428297] Running
	I0919 17:54:59.325005   47798 system_pods.go:74] duration metric: took 9.119135ms to wait for pod list to return data ...
	I0919 17:54:59.325017   47798 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:54:59.328813   47798 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:54:59.328845   47798 node_conditions.go:123] node cpu capacity is 2
	I0919 17:54:59.328859   47798 node_conditions.go:105] duration metric: took 3.833575ms to run NodePressure ...
	I0919 17:54:59.328879   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:54:59.658953   47798 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:54:59.662655   47798 retry.go:31] will retry after 352.037588ms: kubelet not initialised
	I0919 17:55:00.020425   47798 retry.go:31] will retry after 411.927656ms: kubelet not initialised
	I0919 17:55:00.438027   47798 retry.go:31] will retry after 483.370654ms: kubelet not initialised
	I0919 17:55:00.928598   47798 retry.go:31] will retry after 987.946924ms: kubelet not initialised
	I0919 17:55:01.923328   47798 retry.go:31] will retry after 1.679023275s: kubelet not initialised
	I0919 17:55:03.607494   47798 retry.go:31] will retry after 1.92599571s: kubelet not initialised
	I0919 17:55:05.539070   47798 retry.go:31] will retry after 2.735570072s: kubelet not initialised
	I0919 17:55:08.280198   47798 retry.go:31] will retry after 4.516491636s: kubelet not initialised
	I0919 17:55:12.803629   47798 retry.go:31] will retry after 9.24421999s: kubelet not initialised
	I0919 17:55:22.053509   47798 retry.go:31] will retry after 10.860983763s: kubelet not initialised
	I0919 17:55:32.921288   47798 retry.go:31] will retry after 19.590918142s: kubelet not initialised
	I0919 17:55:52.517612   47798 kubeadm.go:787] kubelet initialised
	I0919 17:55:52.517637   47798 kubeadm.go:788] duration metric: took 52.858662322s waiting for restarted kubelet to initialise ...
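The retry.go lines above back off with growing, jittered delays (352ms, 411ms, 483ms, ... 19.6s) until the kubelet reports initialised. A generic sketch of that retry-with-exponential-backoff pattern (not minikube's retry package):

    // Sketch: retry an operation with doubling delays until it succeeds or the
    // retry budget is exhausted, similar in spirit to the retry.go log lines above.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
    	delay := initial
    	for i := 0; i < attempts; i++ {
    		err := op()
    		if err == nil {
    			return nil
    		}
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    		delay *= 2 // grow the wait between attempts
    	}
    	return errors.New("operation did not succeed within the retry budget")
    }

    func main() {
    	start := time.Now()
    	err := retryWithBackoff(8, 300*time.Millisecond, func() error {
    		// Stand-in condition: pretend the kubelet needs ~3s to come up.
    		if time.Since(start) < 3*time.Second {
    			return errors.New("kubelet not initialised")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }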
	I0919 17:55:52.517644   47798 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:55:52.523992   47798 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.530133   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.530151   47798 pod_ready.go:81] duration metric: took 6.127596ms waiting for pod "coredns-5644d7b6d9-4mh4f" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.530160   47798 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.535186   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.535202   47798 pod_ready.go:81] duration metric: took 5.035759ms waiting for pod "coredns-5644d7b6d9-wqwp7" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.535209   47798 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.540300   47798 pod_ready.go:92] pod "etcd-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.540317   47798 pod_ready.go:81] duration metric: took 5.101572ms waiting for pod "etcd-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.540324   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.546670   47798 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.546687   47798 pod_ready.go:81] duration metric: took 6.356984ms waiting for pod "kube-apiserver-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.546696   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.916320   47798 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:52.916342   47798 pod_ready.go:81] duration metric: took 369.639886ms waiting for pod "kube-controller-manager-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:52.916353   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j7kqn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.316733   47798 pod_ready.go:92] pod "kube-proxy-j7kqn" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:53.316762   47798 pod_ready.go:81] duration metric: took 400.400609ms waiting for pod "kube-proxy-j7kqn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.316788   47798 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:53.717319   47798 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace has status "Ready":"True"
	I0919 17:55:53.717344   47798 pod_ready.go:81] duration metric: took 400.544097ms waiting for pod "kube-scheduler-old-k8s-version-100627" in "kube-system" namespace to be "Ready" ...
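The pod_ready.go waits above poll each system pod until its Ready condition is True. An equivalent stand-alone check using client-go (a sketch, not minikube's implementation; the kubeconfig path and pod name are assumptions taken from a default setup and the log):

    // Sketch: fetch one kube-system pod and report whether its Ready condition is True.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumes the default ~/.kube/config points at the cluster under test.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Pod name taken from the log above; any kube-system pod works the same way.
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
    		"kube-scheduler-old-k8s-version-100627", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
    }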
	I0919 17:55:53.717358   47798 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace to be "Ready" ...
	I0919 17:55:56.023621   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:55:58.025543   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:00.522985   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:02.523350   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:05.022971   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:07.023767   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:09.524598   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:12.024269   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:14.524109   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:16.525347   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:19.025990   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:21.522712   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:23.523098   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:25.525823   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:27.526575   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:30.023751   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:32.023914   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:34.523709   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:37.025284   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:39.523886   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:42.023525   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:44.023602   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:46.524942   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:49.023162   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:51.025968   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:53.523737   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:55.524950   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:56:58.023648   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:00.024635   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:02.024981   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:04.524374   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:07.024495   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:09.523646   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:12.023778   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:14.024012   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:16.024668   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:18.524581   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:20.525264   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:23.024223   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:25.024271   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:27.024863   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:29.524389   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:31.524867   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:34.026361   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:36.523516   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:38.523641   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:40.525417   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:43.023938   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:45.024235   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:47.025554   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:49.524344   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:52.023880   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:54.024324   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:56.024615   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:57:58.523806   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:00.524330   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:02.524813   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:05.023667   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:07.024328   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:09.521983   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:11.524126   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:14.033167   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:16.524193   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:19.023478   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:21.023719   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:23.024876   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:25.525000   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:28.022897   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:30.023651   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:32.523506   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:35.023201   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:37.024229   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:39.522709   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:41.524752   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:44.022121   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:46.025229   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:48.523728   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:50.524600   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:53.024769   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:55.523745   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:58:58.025806   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:00.524396   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:03.023037   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:05.023335   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:07.024052   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:09.024205   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:11.523020   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:13.524065   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:16.025098   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:18.523293   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:20.525391   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:23.025049   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:25.522619   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:27.525208   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:30.024344   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:32.024984   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:34.523267   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:36.524365   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:39.023558   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:41.523208   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:43.524139   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:46.023918   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:48.523431   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:50.523998   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:53.024150   47798 pod_ready.go:102] pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace has status "Ready":"False"
	I0919 17:59:53.718434   47798 pod_ready.go:81] duration metric: took 4m0.001059167s waiting for pod "metrics-server-74d5856cc6-rncgn" in "kube-system" namespace to be "Ready" ...
	E0919 17:59:53.718466   47798 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0919 17:59:53.718484   47798 pod_ready.go:38] duration metric: took 4m1.200831266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:59:53.718520   47798 kubeadm.go:640] restartCluster took 5m15.553599416s
	W0919 17:59:53.718575   47798 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
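The 4m0s wait that just expired was polling the Ready condition on the metrics-server pod logged repeatedly above. A hand-run equivalent of that check, against the same pod and timeout, would be the kubectl wait below; this is only an illustrative reproduction, not a command the test executed:

    # Illustrative: reproduces the readiness wait that timed out above.
    kubectl --context old-k8s-version-100627 -n kube-system \
      wait pod metrics-server-74d5856cc6-rncgn --for=condition=Ready --timeout=4m0s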
	I0919 17:59:53.718604   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 17:59:58.500835   47798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.782205666s)
	I0919 17:59:58.500900   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:59:58.514207   47798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:59:58.524054   47798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:59:58.532896   47798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:59:58.532945   47798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0919 17:59:58.588089   47798 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0919 17:59:58.588197   47798 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:59:58.739994   47798 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:59:58.740116   47798 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:59:58.740291   47798 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 17:59:58.968628   47798 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:59:58.968805   47798 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:59:58.977284   47798 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0919 17:59:59.111196   47798 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:59:59.113466   47798 out.go:204]   - Generating certificates and keys ...
	I0919 17:59:59.113599   47798 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:59:59.113711   47798 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:59:59.113854   47798 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0919 17:59:59.113938   47798 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0919 17:59:59.114070   47798 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0919 17:59:59.114144   47798 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0919 17:59:59.114911   47798 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0919 17:59:59.115382   47798 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0919 17:59:59.115986   47798 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0919 17:59:59.116548   47798 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0919 17:59:59.116630   47798 kubeadm.go:322] [certs] Using the existing "sa" key
	I0919 17:59:59.116713   47798 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:59:59.334495   47798 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:59:59.627886   47798 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:59:59.967368   47798 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:00:00.114260   47798 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:00:00.115507   47798 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:00:00.117811   47798 out.go:204]   - Booting up control plane ...
	I0919 18:00:00.117935   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:00:00.122651   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:00:00.125112   47798 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:00:00.126687   47798 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:00:00.129807   47798 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 18:00:11.635043   47798 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.504905 seconds
	I0919 18:00:11.635206   47798 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:00:11.654058   47798 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:00:12.194702   47798 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:00:12.194899   47798 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-100627 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0919 18:00:12.704504   47798 kubeadm.go:322] [bootstrap-token] Using token: exrkug.z0q4aqb4emd0lkvm
	I0919 18:00:12.706136   47798 out.go:204]   - Configuring RBAC rules ...
	I0919 18:00:12.706241   47798 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:00:12.721292   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:00:12.729553   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:00:12.735434   47798 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:00:12.739232   47798 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:00:12.816288   47798 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 18:00:13.140789   47798 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 18:00:13.142170   47798 kubeadm.go:322] 
	I0919 18:00:13.142257   47798 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 18:00:13.142268   47798 kubeadm.go:322] 
	I0919 18:00:13.142338   47798 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 18:00:13.142348   47798 kubeadm.go:322] 
	I0919 18:00:13.142382   47798 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 18:00:13.142468   47798 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:00:13.142554   47798 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:00:13.142571   47798 kubeadm.go:322] 
	I0919 18:00:13.142642   47798 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 18:00:13.142734   47798 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:00:13.142826   47798 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:00:13.142841   47798 kubeadm.go:322] 
	I0919 18:00:13.142952   47798 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0919 18:00:13.143062   47798 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 18:00:13.143073   47798 kubeadm.go:322] 
	I0919 18:00:13.143177   47798 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token exrkug.z0q4aqb4emd0lkvm \
	I0919 18:00:13.143336   47798 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 18:00:13.143374   47798 kubeadm.go:322]     --control-plane 	  
	I0919 18:00:13.143387   47798 kubeadm.go:322] 
	I0919 18:00:13.143501   47798 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:00:13.143511   47798 kubeadm.go:322] 
	I0919 18:00:13.143613   47798 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token exrkug.z0q4aqb4emd0lkvm \
	I0919 18:00:13.143744   47798 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 18:00:13.144341   47798 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:00:13.144373   47798 cni.go:84] Creating CNI manager for ""
	I0919 18:00:13.144392   47798 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:00:13.146075   47798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:00:13.148011   47798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:00:13.159265   47798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
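The 457-byte conflist copied to /etc/cni/net.d above is not reproduced in the log. For orientation only, a representative bridge-plugin conflist of the general shape this "Configuring bridge CNI" step installs might look like the sketch below; the field values (notably the pod subnet) are assumptions, not the file minikube actually wrote:

    # Sketch of a bridge + portmap CNI conflist; all values are illustrative.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF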
	I0919 18:00:13.178271   47798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:00:13.178388   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.178420   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=old-k8s-version-100627 minikube.k8s.io/updated_at=2023_09_19T18_00_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.212392   47798 ops.go:34] apiserver oom_adj: -16
	I0919 18:00:13.509743   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:13.611752   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:14.210418   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:14.710689   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:15.210316   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:15.710515   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:16.210852   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:16.710451   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:17.210179   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:17.710559   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:18.210390   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:18.710683   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:19.210573   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:19.710581   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:20.210732   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:20.710461   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:21.210702   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:21.709813   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:22.209903   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:22.709847   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:23.210276   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:23.710692   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:24.210645   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:24.710835   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:25.209793   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:25.710473   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:26.209945   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:26.710136   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:27.210552   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:27.710679   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:28.209990   47798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:00:28.365531   47798 kubeadm.go:1081] duration metric: took 15.187210441s to wait for elevateKubeSystemPrivileges.
	I0919 18:00:28.365564   47798 kubeadm.go:406] StartCluster complete in 5m50.250366407s
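The burst of "kubectl get sa default" calls above appears to be the elevateKubeSystemPrivileges step waiting for the default service account to exist before the cluster-admin binding for kube-system:default (created a few lines earlier) can take effect. A hand-rolled equivalent of that retry loop, reusing the exact command from the log, would be roughly:

    # Sketch: poll (as the ~500ms loop above does) until the default service account exists.
    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done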
	I0919 18:00:28.365586   47798 settings.go:142] acquiring lock: {Name:mk4dea221d84ac7f39978be61fc7eb70647b5155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:00:28.365675   47798 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 18:00:28.368279   47798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/kubeconfig: {Name:mk436083b52a902b659e099b9770a07ee1ea129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:00:28.368566   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:00:28.368696   47798 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 18:00:28.368769   47798 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368797   47798 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-100627"
	I0919 18:00:28.368803   47798 config.go:182] Loaded profile config "old-k8s-version-100627": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0919 18:00:28.368850   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.368863   47798 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368878   47798 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-100627"
	W0919 18:00:28.368886   47798 addons.go:240] addon metrics-server should already be in state true
	I0919 18:00:28.368922   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.368851   47798 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-100627"
	I0919 18:00:28.368982   47798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-100627"
	I0919 18:00:28.369268   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369273   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369292   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.369294   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.369392   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.369412   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.389023   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0919 18:00:28.389631   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.389718   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35909
	I0919 18:00:28.390023   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.390257   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36187
	I0919 18:00:28.390523   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.390547   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.390646   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.390676   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.390676   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.390895   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391311   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391391   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.391418   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.391709   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.391712   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.391748   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.391757   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.391791   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.391838   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.410811   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0919 18:00:28.410846   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34885
	I0919 18:00:28.411329   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.411366   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.411777   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.411796   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.411888   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.411905   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.412177   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.412219   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.412326   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.412402   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.414149   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.417333   47798 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 18:00:28.414621   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.419038   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:00:28.419051   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:00:28.419071   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.420833   47798 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:00:28.422332   47798 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:00:28.422358   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:00:28.422378   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.422103   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.422902   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.422992   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.423016   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.423112   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.423305   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.423474   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.425328   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.425845   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.425869   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.425895   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.426078   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.426219   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.426322   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.464699   47798 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-100627"
	I0919 18:00:28.464737   47798 host.go:66] Checking if "old-k8s-version-100627" exists ...
	I0919 18:00:28.465028   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.465059   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.479442   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I0919 18:00:28.479839   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.480266   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.480294   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.480676   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.481211   47798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:00:28.481248   47798 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:00:28.495810   47798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0919 18:00:28.496299   47798 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:00:28.496709   47798 main.go:141] libmachine: Using API Version  1
	I0919 18:00:28.496740   47798 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:00:28.497099   47798 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:00:28.497375   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetState
	I0919 18:00:28.499150   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .DriverName
	I0919 18:00:28.499406   47798 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:00:28.499420   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:00:28.499434   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHHostname
	I0919 18:00:28.502227   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.502622   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:1d:e7", ip: ""} in network mk-old-k8s-version-100627: {Iface:virbr1 ExpiryTime:2023-09-19 18:54:21 +0000 UTC Type:0 Mac:52:54:00:ee:1d:e7 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-100627 Clientid:01:52:54:00:ee:1d:e7}
	I0919 18:00:28.502653   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | domain old-k8s-version-100627 has defined IP address 192.168.72.182 and MAC address 52:54:00:ee:1d:e7 in network mk-old-k8s-version-100627
	I0919 18:00:28.502792   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHPort
	I0919 18:00:28.502961   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHKeyPath
	I0919 18:00:28.503112   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .GetSSHUsername
	I0919 18:00:28.503256   47798 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/old-k8s-version-100627/id_rsa Username:docker}
	I0919 18:00:28.738306   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:00:28.738334   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 18:00:28.739481   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:00:28.753537   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:00:28.807289   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:00:28.807321   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:00:28.904080   47798 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:00:28.904107   47798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:00:28.991114   47798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:00:29.327327   47798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
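That pipeline rewrites the coredns ConfigMap in place to add a hosts block for host.minikube.internal. A minimal way to confirm the injected block afterwards (illustrative, not part of the run) would be:

    # Prints the Corefile; per the sed expression above it should now contain the hosts block below.
    kubectl --kubeconfig=/home/jenkins/minikube-integration/17240-6042/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    #        hosts {
    #           192.168.72.1 host.minikube.internal
    #           fallthrough
    #        }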
	I0919 18:00:29.371292   47798 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-100627" context rescaled to 1 replicas
	I0919 18:00:29.371337   47798 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:00:29.373222   47798 out.go:177] * Verifying Kubernetes components...
	I0919 18:00:29.374912   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:00:30.105746   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.366227457s)
	I0919 18:00:30.105776   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.352204878s)
	I0919 18:00:30.105793   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.105805   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.105814   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.105827   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106180   47798 main.go:141] libmachine: (old-k8s-version-100627) DBG | Closing plugin on server side
	I0919 18:00:30.106222   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106236   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106246   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106259   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106357   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106373   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106396   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106408   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106486   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106500   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106513   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.106522   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.106592   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106602   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.106826   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.106842   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.185977   47798 start.go:917] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0919 18:00:30.185980   47798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.194821805s)
	I0919 18:00:30.186035   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.186031   47798 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-100627" to be "Ready" ...
	I0919 18:00:30.186049   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.186367   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.186383   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.186393   47798 main.go:141] libmachine: Making call to close driver server
	I0919 18:00:30.186402   47798 main.go:141] libmachine: (old-k8s-version-100627) Calling .Close
	I0919 18:00:30.186647   47798 main.go:141] libmachine: Successfully made call to close driver server
	I0919 18:00:30.186671   47798 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 18:00:30.186681   47798 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-100627"
	I0919 18:00:30.188971   47798 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0919 18:00:30.190949   47798 addons.go:502] enable addons completed in 1.822257993s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0919 18:00:30.236503   47798 node_ready.go:49] node "old-k8s-version-100627" has status "Ready":"True"
	I0919 18:00:30.236526   47798 node_ready.go:38] duration metric: took 50.473068ms waiting for node "old-k8s-version-100627" to be "Ready" ...
	I0919 18:00:30.236538   47798 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:00:30.243959   47798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:32.262563   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:34.263997   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:36.762957   47798 pod_ready.go:102] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"False"
	I0919 18:00:37.763670   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.763694   47798 pod_ready.go:81] duration metric: took 7.519708991s waiting for pod "coredns-5644d7b6d9-dxjbg" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.763704   47798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.769351   47798 pod_ready.go:92] pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.769371   47798 pod_ready.go:81] duration metric: took 5.660975ms waiting for pod "coredns-5644d7b6d9-xw6fj" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.769382   47798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x7p9v" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.773846   47798 pod_ready.go:92] pod "kube-proxy-x7p9v" in "kube-system" namespace has status "Ready":"True"
	I0919 18:00:37.773866   47798 pod_ready.go:81] duration metric: took 4.476479ms waiting for pod "kube-proxy-x7p9v" in "kube-system" namespace to be "Ready" ...
	I0919 18:00:37.773879   47798 pod_ready.go:38] duration metric: took 7.537327576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:00:37.773896   47798 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:00:37.773947   47798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:00:37.789245   47798 api_server.go:72] duration metric: took 8.417877969s to wait for apiserver process to appear ...
	I0919 18:00:37.789267   47798 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:00:37.789283   47798 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I0919 18:00:37.796929   47798 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I0919 18:00:37.798217   47798 api_server.go:141] control plane version: v1.16.0
	I0919 18:00:37.798233   47798 api_server.go:131] duration metric: took 8.960108ms to wait for apiserver health ...
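The healthz probe above can be repeated by hand against the same endpoint; an illustrative check (anonymous access to /healthz is normally permitted, and -k skips verification of the self-signed certificate) is:

    # Expect the body "ok" with HTTP 200, matching the log lines above.
    curl -sk https://192.168.72.182:8443/healthz; echo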
	I0919 18:00:37.798240   47798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:00:37.802732   47798 system_pods.go:59] 5 kube-system pods found
	I0919 18:00:37.802751   47798 system_pods.go:61] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:37.802755   47798 system_pods.go:61] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:37.802759   47798 system_pods.go:61] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:37.802765   47798 system_pods.go:61] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:37.802771   47798 system_pods.go:61] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:37.802775   47798 system_pods.go:74] duration metric: took 4.531294ms to wait for pod list to return data ...
	I0919 18:00:37.802781   47798 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:00:37.805090   47798 default_sa.go:45] found service account: "default"
	I0919 18:00:37.805108   47798 default_sa.go:55] duration metric: took 2.323003ms for default service account to be created ...
	I0919 18:00:37.805115   47798 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:00:37.809387   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:37.809412   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:37.809421   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:37.809428   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:37.809437   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:37.809445   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:37.809492   47798 retry.go:31] will retry after 308.50392ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
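The "missing components" retries that follow concern the control-plane static pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler), whose mirror pods have presumably not yet been registered with the API server. One way to watch them appear, using the same component labels the waiter checks (illustrative only), is:

    # Watch for control-plane pods by component label.
    kubectl --context old-k8s-version-100627 -n kube-system get pods \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)' -w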
	I0919 18:00:38.123229   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.123251   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.123256   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.123262   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.123271   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.123277   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.123291   47798 retry.go:31] will retry after 322.697394ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.452201   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.452227   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.452232   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.452236   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.452242   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.452248   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.452263   47798 retry.go:31] will retry after 457.851598ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:38.916270   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:38.916309   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:38.916318   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:38.916325   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:38.916336   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:38.916345   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:38.916367   47798 retry.go:31] will retry after 438.479707ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:39.360169   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:39.360194   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:39.360199   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:39.360203   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:39.360210   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:39.360214   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:39.360228   47798 retry.go:31] will retry after 636.764599ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:40.002876   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:40.002902   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:40.002907   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:40.002911   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:40.002918   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:40.002922   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:40.002936   47798 retry.go:31] will retry after 763.456742ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:40.771715   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:40.771743   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:40.771751   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:40.771758   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:40.771768   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:40.771777   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:40.771794   47798 retry.go:31] will retry after 849.595493ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:41.628988   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:41.629014   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:41.629019   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:41.629024   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:41.629030   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:41.629035   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:41.629048   47798 retry.go:31] will retry after 1.130396523s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:42.765798   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:42.765825   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:42.765830   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:42.765834   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:42.765841   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:42.765846   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:42.765861   47798 retry.go:31] will retry after 1.444918771s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:44.216701   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:44.216726   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:44.216731   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:44.216735   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:44.216743   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:44.216751   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:44.216769   47798 retry.go:31] will retry after 2.010339666s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:46.233732   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:46.233764   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:46.233772   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:46.233779   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:46.233789   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:46.233798   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:46.233817   47798 retry.go:31] will retry after 2.386355588s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:48.625414   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:48.625451   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:48.625458   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:48.625463   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:48.625469   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:48.625478   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:48.625496   47798 retry.go:31] will retry after 3.40684833s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:52.037490   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:52.037516   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:52.037522   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:52.037526   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:52.037532   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:52.037538   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:52.037553   47798 retry.go:31] will retry after 4.080274795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:00:56.123283   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:00:56.123307   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:00:56.123312   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:00:56.123316   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:00:56.123322   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:00:56.123327   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:00:56.123341   47798 retry.go:31] will retry after 4.076928493s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:00.205817   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:01:00.205842   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:00.205848   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:00.205851   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:00.205860   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:00.205865   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:00.205880   47798 retry.go:31] will retry after 6.340158574s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:06.551794   47798 system_pods.go:86] 5 kube-system pods found
	I0919 18:01:06.551821   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:06.551829   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:06.551835   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:06.551844   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:06.551852   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:06.551870   47798 retry.go:31] will retry after 8.178931758s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:14.737898   47798 system_pods.go:86] 8 kube-system pods found
	I0919 18:01:14.737926   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:14.737934   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:14.737941   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:14.737947   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Pending
	I0919 18:01:14.737955   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:14.737961   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Pending
	I0919 18:01:14.737969   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:14.737977   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:14.737996   47798 retry.go:31] will retry after 7.690456991s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I0919 18:01:22.435672   47798 system_pods.go:86] 8 kube-system pods found
	I0919 18:01:22.435706   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:22.435714   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:22.435721   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:22.435728   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Running
	I0919 18:01:22.435736   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:22.435744   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Running
	I0919 18:01:22.435755   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:22.435765   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:22.435782   47798 retry.go:31] will retry after 8.810480707s: missing components: kube-apiserver
	I0919 18:01:31.254171   47798 system_pods.go:86] 9 kube-system pods found
	I0919 18:01:31.254216   47798 system_pods.go:89] "coredns-5644d7b6d9-dxjbg" [a6fd4de3-ef93-4c7a-8a9a-849e95f33f57] Running
	I0919 18:01:31.254223   47798 system_pods.go:89] "coredns-5644d7b6d9-xw6fj" [6248b7d9-ea82-4e3a-9c11-71eb27e98b79] Running
	I0919 18:01:31.254228   47798 system_pods.go:89] "etcd-old-k8s-version-100627" [81fb426d-cabc-4a93-ac8b-c269012183dd] Running
	I0919 18:01:31.254233   47798 system_pods.go:89] "kube-apiserver-old-k8s-version-100627" [477571a2-c091-4d30-9c70-389556fade77] Running
	I0919 18:01:31.254240   47798 system_pods.go:89] "kube-controller-manager-old-k8s-version-100627" [67003043-5264-49df-99ae-3e1dfa91743e] Running
	I0919 18:01:31.254246   47798 system_pods.go:89] "kube-proxy-x7p9v" [c0c6eedb-07eb-447c-9911-67439b067046] Running
	I0919 18:01:31.254252   47798 system_pods.go:89] "kube-scheduler-old-k8s-version-100627" [cc434bb4-2c3d-4dc5-b921-7516adf4cfb8] Running
	I0919 18:01:31.254263   47798 system_pods.go:89] "metrics-server-74d5856cc6-6gls6" [d04324e4-19d0-4e5b-ad86-2e290757cc2b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 18:01:31.254278   47798 system_pods.go:89] "storage-provisioner" [1ecaa3fa-626f-4dc4-927a-e1057c683c58] Running
	I0919 18:01:31.254287   47798 system_pods.go:126] duration metric: took 53.449167375s to wait for k8s-apps to be running ...
	I0919 18:01:31.254295   47798 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:01:31.254346   47798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:01:31.270302   47798 system_svc.go:56] duration metric: took 16.000049ms WaitForService to wait for kubelet.
	I0919 18:01:31.270329   47798 kubeadm.go:581] duration metric: took 1m1.898967343s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 18:01:31.270356   47798 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:01:31.273300   47798 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 18:01:31.273324   47798 node_conditions.go:123] node cpu capacity is 2
	I0919 18:01:31.273334   47798 node_conditions.go:105] duration metric: took 2.973337ms to run NodePressure ...
	I0919 18:01:31.273344   47798 start.go:228] waiting for startup goroutines ...
	I0919 18:01:31.273349   47798 start.go:233] waiting for cluster config update ...
	I0919 18:01:31.273358   47798 start.go:242] writing updated cluster config ...
	I0919 18:01:31.273601   47798 ssh_runner.go:195] Run: rm -f paused
	I0919 18:01:31.321319   47798 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0919 18:01:31.323360   47798 out.go:177] 
	W0919 18:01:31.324777   47798 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0919 18:01:31.326209   47798 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0919 18:01:31.327585   47798 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-100627" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:54:20 UTC, ends at Tue 2023-09-19 18:06:58 UTC. --
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.231789643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146818231778862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=b369419c-78a8-4154-b6ea-4941ef83bcdc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.232696851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c3b3850e-383b-4a26-a4f8-593a77d08cd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.232749571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c3b3850e-383b-4a26-a4f8-593a77d08cd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.232966715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be9296e5b79f28c42d63b18c8ff41a58b2c38d5da0538d7a0a3aea66773b63e9,PodSandboxId:1a153f3f3c3a51099dce0bb83efe4d039ef2fcffe902b079099f55086d0eeeb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695146431068766042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ecaa3fa-626f-4dc4-927a-e1057c683c58,},Annotations:map[string]string{io.kubernetes.container.hash: 26347831,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ebe920dc37c5c580b7ccbad283a25add50ea0cd388032186553195a7ad484c1,PodSandboxId:22a147069fa92f8e8be257ea5820626eed4d85931590c9e51c77bc643852e581,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1695146430226028040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7p9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c6eedb-07eb-447c-9911-67439b067046,},Annotations:map[string]string{io.kubernetes.container.hash: dda7ccc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb9068f90e2c7e6ef1a29f677f8f0623cf8f1c4b12093c0b61b3776d9c5c21d,PodSandboxId:bacaf87764b5ba2654a2f95e6883091a34b75a2cef1d6e5dbb97751f07df8fea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695146428949779940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-xw6fj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6248b7d9-ea82-4e3a-9c11-71eb27e98b79,},Annotations:map[string]string{io.kubernetes.container.hash: daf0b989,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc48453b3544261357e7987a4a27db380506b86eccb79b4be2a845b6febb3502,PodSandboxId:590ae01c17f2c4b22bc03df67fe06eec6bd87c6c7609df14fbf51d983cc42dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1695146403564782403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1a8584b7a2f3994535e7ec284367ee,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 84c4f7f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6435a5624e04a25a18446ae337a50dee10c8d226cbebc5b96baa9f9d376d4569,PodSandboxId:c1443db5bbca0fcb3d44f3da773bfdab5627c887ebb85f2e45da1702371dbc44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1695146401944467229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d7c8f5ed80a7d16a1a3fd2b0d60d26268c6af866d2106e4dcf2d6cf02c06ae,PodSandboxId:868f16ee086ecc354662ab06ce66e6330c71f9df889a7cf66bcd08d01e7c90a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1695146401684966175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:map[string]string{io.kubern
etes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f071f72179670df22e3147adaef16fac9311843311ee00d5b5f27e42a2cf8bdb,PodSandboxId:f77dccc9eebfdac7644c34fd9351bb63a6f9fde1360ca0c7ee9d320cb718e5d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1695146401547626401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c3b3850e-383b-4a26-a4f8-593a77d08cd2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.272668496Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5c511912-08cd-4aea-afee-4e9c36b94714 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.272731042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5c511912-08cd-4aea-afee-4e9c36b94714 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.274167746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7dd9e53b-c65c-43e3-a238-f43d4aad5de0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.274530136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146818274518184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=7dd9e53b-c65c-43e3-a238-f43d4aad5de0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.275130553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8c106cac-bbf3-4815-82f8-fd6d743aad44 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.275179565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8c106cac-bbf3-4815-82f8-fd6d743aad44 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.275338522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be9296e5b79f28c42d63b18c8ff41a58b2c38d5da0538d7a0a3aea66773b63e9,PodSandboxId:1a153f3f3c3a51099dce0bb83efe4d039ef2fcffe902b079099f55086d0eeeb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695146431068766042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ecaa3fa-626f-4dc4-927a-e1057c683c58,},Annotations:map[string]string{io.kubernetes.container.hash: 26347831,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ebe920dc37c5c580b7ccbad283a25add50ea0cd388032186553195a7ad484c1,PodSandboxId:22a147069fa92f8e8be257ea5820626eed4d85931590c9e51c77bc643852e581,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1695146430226028040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7p9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c6eedb-07eb-447c-9911-67439b067046,},Annotations:map[string]string{io.kubernetes.container.hash: dda7ccc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb9068f90e2c7e6ef1a29f677f8f0623cf8f1c4b12093c0b61b3776d9c5c21d,PodSandboxId:bacaf87764b5ba2654a2f95e6883091a34b75a2cef1d6e5dbb97751f07df8fea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695146428949779940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-xw6fj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6248b7d9-ea82-4e3a-9c11-71eb27e98b79,},Annotations:map[string]string{io.kubernetes.container.hash: daf0b989,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc48453b3544261357e7987a4a27db380506b86eccb79b4be2a845b6febb3502,PodSandboxId:590ae01c17f2c4b22bc03df67fe06eec6bd87c6c7609df14fbf51d983cc42dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1695146403564782403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1a8584b7a2f3994535e7ec284367ee,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 84c4f7f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6435a5624e04a25a18446ae337a50dee10c8d226cbebc5b96baa9f9d376d4569,PodSandboxId:c1443db5bbca0fcb3d44f3da773bfdab5627c887ebb85f2e45da1702371dbc44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1695146401944467229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d7c8f5ed80a7d16a1a3fd2b0d60d26268c6af866d2106e4dcf2d6cf02c06ae,PodSandboxId:868f16ee086ecc354662ab06ce66e6330c71f9df889a7cf66bcd08d01e7c90a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1695146401684966175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:map[string]string{io.kubern
etes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f071f72179670df22e3147adaef16fac9311843311ee00d5b5f27e42a2cf8bdb,PodSandboxId:f77dccc9eebfdac7644c34fd9351bb63a6f9fde1360ca0c7ee9d320cb718e5d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1695146401547626401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8c106cac-bbf3-4815-82f8-fd6d743aad44 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.313273629Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c67f18d3-251c-40b6-aaf3-7c086651cdf6 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.313327927Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c67f18d3-251c-40b6-aaf3-7c086651cdf6 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.314387181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9fb36c24-2e42-4034-a8f2-95fed9892a52 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.314793283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146818314776061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=9fb36c24-2e42-4034-a8f2-95fed9892a52 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.315589036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e47146ee-e1d8-4d74-ac3d-96e116d37de1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.315631463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e47146ee-e1d8-4d74-ac3d-96e116d37de1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.315784367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be9296e5b79f28c42d63b18c8ff41a58b2c38d5da0538d7a0a3aea66773b63e9,PodSandboxId:1a153f3f3c3a51099dce0bb83efe4d039ef2fcffe902b079099f55086d0eeeb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695146431068766042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ecaa3fa-626f-4dc4-927a-e1057c683c58,},Annotations:map[string]string{io.kubernetes.container.hash: 26347831,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ebe920dc37c5c580b7ccbad283a25add50ea0cd388032186553195a7ad484c1,PodSandboxId:22a147069fa92f8e8be257ea5820626eed4d85931590c9e51c77bc643852e581,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1695146430226028040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7p9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c6eedb-07eb-447c-9911-67439b067046,},Annotations:map[string]string{io.kubernetes.container.hash: dda7ccc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb9068f90e2c7e6ef1a29f677f8f0623cf8f1c4b12093c0b61b3776d9c5c21d,PodSandboxId:bacaf87764b5ba2654a2f95e6883091a34b75a2cef1d6e5dbb97751f07df8fea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695146428949779940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-xw6fj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6248b7d9-ea82-4e3a-9c11-71eb27e98b79,},Annotations:map[string]string{io.kubernetes.container.hash: daf0b989,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc48453b3544261357e7987a4a27db380506b86eccb79b4be2a845b6febb3502,PodSandboxId:590ae01c17f2c4b22bc03df67fe06eec6bd87c6c7609df14fbf51d983cc42dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1695146403564782403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1a8584b7a2f3994535e7ec284367ee,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 84c4f7f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6435a5624e04a25a18446ae337a50dee10c8d226cbebc5b96baa9f9d376d4569,PodSandboxId:c1443db5bbca0fcb3d44f3da773bfdab5627c887ebb85f2e45da1702371dbc44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1695146401944467229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d7c8f5ed80a7d16a1a3fd2b0d60d26268c6af866d2106e4dcf2d6cf02c06ae,PodSandboxId:868f16ee086ecc354662ab06ce66e6330c71f9df889a7cf66bcd08d01e7c90a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1695146401684966175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:map[string]string{io.kubern
etes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f071f72179670df22e3147adaef16fac9311843311ee00d5b5f27e42a2cf8bdb,PodSandboxId:f77dccc9eebfdac7644c34fd9351bb63a6f9fde1360ca0c7ee9d320cb718e5d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1695146401547626401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e47146ee-e1d8-4d74-ac3d-96e116d37de1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.351799008Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9c5be14b-a934-4212-a84e-68389a923c0d name=/runtime.v1.RuntimeService/Version
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.351961373Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9c5be14b-a934-4212-a84e-68389a923c0d name=/runtime.v1.RuntimeService/Version
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.353292812Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fe1133b5-cd4b-42f6-a77f-d5cc02cfe092 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.353658182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146818353646679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=fe1133b5-cd4b-42f6-a77f-d5cc02cfe092 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.354394269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ff2a1e45-5483-4ff9-8165-ef3a2c3091da name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.354444249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ff2a1e45-5483-4ff9-8165-ef3a2c3091da name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:06:58 old-k8s-version-100627 crio[732]: time="2023-09-19 18:06:58.354587906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:be9296e5b79f28c42d63b18c8ff41a58b2c38d5da0538d7a0a3aea66773b63e9,PodSandboxId:1a153f3f3c3a51099dce0bb83efe4d039ef2fcffe902b079099f55086d0eeeb3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695146431068766042,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ecaa3fa-626f-4dc4-927a-e1057c683c58,},Annotations:map[string]string{io.kubernetes.container.hash: 26347831,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ebe920dc37c5c580b7ccbad283a25add50ea0cd388032186553195a7ad484c1,PodSandboxId:22a147069fa92f8e8be257ea5820626eed4d85931590c9e51c77bc643852e581,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1695146430226028040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x7p9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0c6eedb-07eb-447c-9911-67439b067046,},Annotations:map[string]string{io.kubernetes.container.hash: dda7ccc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb9068f90e2c7e6ef1a29f677f8f0623cf8f1c4b12093c0b61b3776d9c5c21d,PodSandboxId:bacaf87764b5ba2654a2f95e6883091a34b75a2cef1d6e5dbb97751f07df8fea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1695146428949779940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-xw6fj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6248b7d9-ea82-4e3a-9c11-71eb27e98b79,},Annotations:map[string]string{io.kubernetes.container.hash: daf0b989,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc48453b3544261357e7987a4a27db380506b86eccb79b4be2a845b6febb3502,PodSandboxId:590ae01c17f2c4b22bc03df67fe06eec6bd87c6c7609df14fbf51d983cc42dc2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1695146403564782403,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b1a8584b7a2f3994535e7ec284367ee,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 84c4f7f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6435a5624e04a25a18446ae337a50dee10c8d226cbebc5b96baa9f9d376d4569,PodSandboxId:c1443db5bbca0fcb3d44f3da773bfdab5627c887ebb85f2e45da1702371dbc44,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1695146401944467229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d7c8f5ed80a7d16a1a3fd2b0d60d26268c6af866d2106e4dcf2d6cf02c06ae,PodSandboxId:868f16ee086ecc354662ab06ce66e6330c71f9df889a7cf66bcd08d01e7c90a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1695146401684966175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:map[string]string{io.kubern
etes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f071f72179670df22e3147adaef16fac9311843311ee00d5b5f27e42a2cf8bdb,PodSandboxId:f77dccc9eebfdac7644c34fd9351bb63a6f9fde1360ca0c7ee9d320cb718e5d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1695146401547626401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-100627,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ff2a1e45-5483-4ff9-8165-ef3a2c3091da name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be9296e5b79f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Running             storage-provisioner       0                   1a153f3f3c3a5       storage-provisioner
	1ebe920dc37c5       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   6 minutes ago       Running             kube-proxy                0                   22a147069fa92       kube-proxy-x7p9v
	3cb9068f90e2c       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   6 minutes ago       Running             coredns                   0                   bacaf87764b5b       coredns-5644d7b6d9-xw6fj
	dc48453b35442       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   6 minutes ago       Running             etcd                      0                   590ae01c17f2c       etcd-old-k8s-version-100627
	6435a5624e04a       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   6 minutes ago       Running             kube-scheduler            0                   c1443db5bbca0       kube-scheduler-old-k8s-version-100627
	47d7c8f5ed80a       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   6 minutes ago       Running             kube-apiserver            0                   868f16ee086ec       kube-apiserver-old-k8s-version-100627
	f071f72179670       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   6 minutes ago       Running             kube-controller-manager   0                   f77dccc9eebfd       kube-controller-manager-old-k8s-version-100627
	
	* 
	* ==> coredns [3cb9068f90e2c7e6ef1a29f677f8f0623cf8f1c4b12093c0b61b3776d9c5c21d] <==
	* .:53
	2023-09-19T18:00:29.617Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-09-19T18:00:29.617Z [INFO] CoreDNS-1.6.2
	2023-09-19T18:00:29.617Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-09-19T18:00:55.072Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	2023-09-19T18:00:55.082Z [INFO] 127.0.0.1:43891 - 31523 "HINFO IN 6814783655924654923.8657364638924959571. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010874987s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-100627
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-100627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=old-k8s-version-100627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T18_00_13_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 18:00:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 18:06:08 +0000   Tue, 19 Sep 2023 18:00:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 18:06:08 +0000   Tue, 19 Sep 2023 18:00:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 18:06:08 +0000   Tue, 19 Sep 2023 18:00:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 18:06:08 +0000   Tue, 19 Sep 2023 18:00:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.182
	  Hostname:    old-k8s-version-100627
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 60319d8735584b7ab7f14de7c68a7260
	 System UUID:                60319d87-3558-4b7a-b7f1-4de7c68a7260
	 Boot ID:                    4308a796-3eab-4648-a999-943289e94536
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-xw6fj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m31s
	  kube-system                etcd-old-k8s-version-100627                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                kube-apiserver-old-k8s-version-100627             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                kube-controller-manager-old-k8s-version-100627    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                kube-proxy-x7p9v                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                kube-scheduler-old-k8s-version-100627             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                metrics-server-74d5856cc6-6gls6                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         6m27s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  6m58s (x8 over 6m58s)  kubelet, old-k8s-version-100627     Node old-k8s-version-100627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m58s (x7 over 6m58s)  kubelet, old-k8s-version-100627     Node old-k8s-version-100627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m58s (x8 over 6m58s)  kubelet, old-k8s-version-100627     Node old-k8s-version-100627 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m28s                  kube-proxy, old-k8s-version-100627  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep19 17:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072778] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.474724] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.418651] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.151992] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.635716] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.988106] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.108395] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.161819] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.114749] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[  +0.239400] systemd-fstab-generator[717]: Ignoring "noauto" for root device
	[ +19.943984] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.445856] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep19 17:55] kauditd_printk_skb: 18 callbacks suppressed
	[Sep19 17:59] systemd-fstab-generator[3186]: Ignoring "noauto" for root device
	[Sep19 18:00] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.118691] kauditd_printk_skb: 7 callbacks suppressed
	[Sep19 18:02] hrtimer: interrupt took 2936171 ns
	
	* 
	* ==> etcd [dc48453b3544261357e7987a4a27db380506b86eccb79b4be2a845b6febb3502] <==
	* 2023-09-19 18:00:03.689800 I | raft: ff4c26660998c2c8 became follower at term 1
	2023-09-19 18:00:03.698969 W | auth: simple token is not cryptographically signed
	2023-09-19 18:00:03.704422 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-19 18:00:03.706359 I | etcdserver: ff4c26660998c2c8 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-19 18:00:03.706912 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-19 18:00:03.707076 I | embed: listening for metrics on http://192.168.72.182:2381
	2023-09-19 18:00:03.707254 I | etcdserver/membership: added member ff4c26660998c2c8 [https://192.168.72.182:2380] to cluster 1c15affd5c0f3dba
	2023-09-19 18:00:03.707376 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-19 18:00:04.390393 I | raft: ff4c26660998c2c8 is starting a new election at term 1
	2023-09-19 18:00:04.390452 I | raft: ff4c26660998c2c8 became candidate at term 2
	2023-09-19 18:00:04.390464 I | raft: ff4c26660998c2c8 received MsgVoteResp from ff4c26660998c2c8 at term 2
	2023-09-19 18:00:04.390474 I | raft: ff4c26660998c2c8 became leader at term 2
	2023-09-19 18:00:04.390479 I | raft: raft.node: ff4c26660998c2c8 elected leader ff4c26660998c2c8 at term 2
	2023-09-19 18:00:04.391018 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-19 18:00:04.392303 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-19 18:00:04.392446 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-19 18:00:04.392481 I | etcdserver: published {Name:old-k8s-version-100627 ClientURLs:[https://192.168.72.182:2379]} to cluster 1c15affd5c0f3dba
	2023-09-19 18:00:04.392499 I | embed: ready to serve client requests
	2023-09-19 18:00:04.392702 I | embed: ready to serve client requests
	2023-09-19 18:00:04.394073 I | embed: serving client requests on 192.168.72.182:2379
	2023-09-19 18:00:04.396162 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-19 18:00:28.955322 W | etcdserver: request "header:<ID:14035620721202785636 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-5644d7b6d9\" mod_revision:334 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-5644d7b6d9\" value_size:1208 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-5644d7b6d9\" > >>" with result "size:16" took too long (379.892527ms) to execute
	2023-09-19 18:00:28.956406 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:1269" took too long (419.138333ms) to execute
	2023-09-19 18:00:29.275607 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-dxjbg\" " with result "range_response_count:1 size:1695" took too long (321.294907ms) to execute
	2023-09-19 18:00:29.710539 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (171.400619ms) to execute
	
	* 
	* ==> kernel <==
	*  18:06:58 up 12 min,  0 users,  load average: 0.00, 0.23, 0.24
	Linux old-k8s-version-100627 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [47d7c8f5ed80a7d16a1a3fd2b0d60d26268c6af866d2106e4dcf2d6cf02c06ae] <==
	* I0919 18:00:31.780486       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0919 18:00:31.780657       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 18:00:31.780744       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:00:31.780921       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:01:31.781273       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0919 18:01:31.781376       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 18:01:31.781410       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:01:31.781417       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:03:31.781987       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0919 18:03:31.782124       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 18:03:31.782214       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:03:31.782225       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:05:09.248488       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0919 18:05:09.248621       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 18:05:09.248682       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:05:09.248689       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:06:09.249182       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0919 18:06:09.249311       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 18:06:09.249358       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:06:09.249365       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [f071f72179670df22e3147adaef16fac9311843311ee00d5b5f27e42a2cf8bdb] <==
	* I0919 18:00:31.053040       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"a07503a5-0b8c-49bc-874e-d50abf1f9d87", APIVersion:"apps/v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-6gls6
	E0919 18:00:59.300711       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:01:00.150474       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:01:29.553151       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:01:32.152590       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:01:59.805048       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:02:04.154439       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:02:30.057115       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:02:36.156334       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:03:00.309444       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:03:08.158459       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:03:30.561734       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:03:40.160536       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:04:00.814649       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:04:12.162951       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:04:31.067186       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:04:44.165357       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:05:01.319343       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:05:16.167534       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:05:31.571310       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:05:48.170570       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:06:01.823149       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:06:20.172934       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0919 18:06:32.075178       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0919 18:06:52.174907       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [1ebe920dc37c5c580b7ccbad283a25add50ea0cd388032186553195a7ad484c1] <==
	* W0919 18:00:30.804464       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0919 18:00:30.817764       1 node.go:135] Successfully retrieved node IP: 192.168.72.182
	I0919 18:00:30.817905       1 server_others.go:149] Using iptables Proxier.
	I0919 18:00:30.820485       1 server.go:529] Version: v1.16.0
	I0919 18:00:30.822360       1 config.go:313] Starting service config controller
	I0919 18:00:30.822424       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0919 18:00:30.828880       1 config.go:131] Starting endpoints config controller
	I0919 18:00:30.834495       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0919 18:00:30.922917       1 shared_informer.go:204] Caches are synced for service config 
	I0919 18:00:30.937834       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [6435a5624e04a25a18446ae337a50dee10c8d226cbebc5b96baa9f9d376d4569] <==
	* W0919 18:00:08.235717       1 authentication.go:79] Authentication is disabled
	I0919 18:00:08.235727       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0919 18:00:08.236154       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0919 18:00:08.311466       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 18:00:08.317139       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:00:08.317255       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:00:08.317366       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:00:08.317393       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:00:08.317931       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:00:08.321786       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:00:08.321936       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 18:00:08.321966       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 18:00:08.322015       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:00:08.322043       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:00:09.313371       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 18:00:09.318309       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:00:09.320043       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:00:09.322143       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:00:09.323958       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:00:09.325604       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:00:09.326003       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:00:09.326559       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 18:00:09.328749       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 18:00:09.330098       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:00:09.330860       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:54:20 UTC, ends at Tue 2023-09-19 18:06:58 UTC. --
	Sep 19 18:03:18 old-k8s-version-100627 kubelet[3204]: E0919 18:03:18.588361    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:03:29 old-k8s-version-100627 kubelet[3204]: E0919 18:03:29.604042    3204 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 19 18:03:29 old-k8s-version-100627 kubelet[3204]: E0919 18:03:29.604174    3204 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 19 18:03:29 old-k8s-version-100627 kubelet[3204]: E0919 18:03:29.604244    3204 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 19 18:03:29 old-k8s-version-100627 kubelet[3204]: E0919 18:03:29.604298    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 19 18:03:40 old-k8s-version-100627 kubelet[3204]: E0919 18:03:40.583613    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:03:51 old-k8s-version-100627 kubelet[3204]: E0919 18:03:51.582464    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:04:05 old-k8s-version-100627 kubelet[3204]: E0919 18:04:05.582969    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:04:18 old-k8s-version-100627 kubelet[3204]: E0919 18:04:18.582969    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:04:30 old-k8s-version-100627 kubelet[3204]: E0919 18:04:30.584674    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:04:42 old-k8s-version-100627 kubelet[3204]: E0919 18:04:42.582277    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:04:55 old-k8s-version-100627 kubelet[3204]: E0919 18:04:55.582481    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:05:00 old-k8s-version-100627 kubelet[3204]: E0919 18:05:00.657760    3204 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Sep 19 18:05:08 old-k8s-version-100627 kubelet[3204]: E0919 18:05:08.582701    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:05:20 old-k8s-version-100627 kubelet[3204]: E0919 18:05:20.582944    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:05:33 old-k8s-version-100627 kubelet[3204]: E0919 18:05:33.582585    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:05:45 old-k8s-version-100627 kubelet[3204]: E0919 18:05:45.583061    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:05:56 old-k8s-version-100627 kubelet[3204]: E0919 18:05:56.582379    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:06:08 old-k8s-version-100627 kubelet[3204]: E0919 18:06:08.582507    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:06:19 old-k8s-version-100627 kubelet[3204]: E0919 18:06:19.608479    3204 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 19 18:06:19 old-k8s-version-100627 kubelet[3204]: E0919 18:06:19.608612    3204 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 19 18:06:19 old-k8s-version-100627 kubelet[3204]: E0919 18:06:19.608701    3204 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 19 18:06:19 old-k8s-version-100627 kubelet[3204]: E0919 18:06:19.608737    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 19 18:06:30 old-k8s-version-100627 kubelet[3204]: E0919 18:06:30.583100    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 18:06:45 old-k8s-version-100627 kubelet[3204]: E0919 18:06:45.583216    3204 pod_workers.go:191] Error syncing pod d04324e4-19d0-4e5b-ad86-2e290757cc2b ("metrics-server-74d5856cc6-6gls6_kube-system(d04324e4-19d0-4e5b-ad86-2e290757cc2b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [be9296e5b79f28c42d63b18c8ff41a58b2c38d5da0538d7a0a3aea66773b63e9] <==
	* I0919 18:00:31.174328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:00:31.185959       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:00:31.186096       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:00:31.196417       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:00:31.197760       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-100627_d664fb39-eb04-47e9-84ac-f2b160b5d137!
	I0919 18:00:31.196747       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"83b31fa0-1fd4-48c3-a508-a5ea7d1557a5", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-100627_d664fb39-eb04-47e9-84ac-f2b160b5d137 became leader
	I0919 18:00:31.298168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-100627_d664fb39-eb04-47e9-84ac-f2b160b5d137!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-100627 -n old-k8s-version-100627
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-100627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-6gls6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-100627 describe pod metrics-server-74d5856cc6-6gls6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-100627 describe pod metrics-server-74d5856cc6-6gls6: exit status 1 (63.046443ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-6gls6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-100627 describe pod metrics-server-74d5856cc6-6gls6: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (327.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (349.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-215748 -n no-preload-215748
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-19 18:07:30.562296729 +0000 UTC m=+5583.494280767
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-215748 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-215748 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.895µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-215748 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215748 -n no-preload-215748
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-215748 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-215748 logs -n 25: (3.659064405s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-367630                            | force-systemd-env-367630     | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:37 UTC |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-415155            | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-140688 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | disable-driver-mounts-140688                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:41 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-215748             | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-415555  | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC | 19 Sep 23 17:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC |                     |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-415155                 | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-215748                  | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-415555       | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC | 19 Sep 23 17:52 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-100627        | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC | 19 Sep 23 17:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-100627             | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC | 19 Sep 23 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 18:06 UTC | 19 Sep 23 18:07 UTC |
	| start   | -p newest-cni-199016 --memory=2200 --alsologtostderr   | newest-cni-199016            | jenkins | v1.31.2 | 19 Sep 23 18:07 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
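	For reference, the flags recorded for the final start in the table above amount to a single command roughly like the one below (reconstructed from the table rows; the exact binary path used by the CI job may differ):
	
	minikube start -p newest-cni-199016 --memory=2200 --alsologtostderr \
	    --wait=apiserver,system_pods,default_sa \
	    --feature-gates ServerSideApply=true \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.28.2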
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 18:07:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:07:00.670676   51238 out.go:296] Setting OutFile to fd 1 ...
	I0919 18:07:00.670952   51238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 18:07:00.670963   51238 out.go:309] Setting ErrFile to fd 2...
	I0919 18:07:00.670968   51238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 18:07:00.671142   51238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 18:07:00.671701   51238 out.go:303] Setting JSON to false
	I0919 18:07:00.672616   51238 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6571,"bootTime":1695140250,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:07:00.672672   51238 start.go:138] virtualization: kvm guest
	I0919 18:07:00.674863   51238 out.go:177] * [newest-cni-199016] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:07:00.676218   51238 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 18:07:00.677598   51238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:07:00.676228   51238 notify.go:220] Checking for updates...
	I0919 18:07:00.680499   51238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 18:07:00.681947   51238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 18:07:00.683234   51238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:07:00.684495   51238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:07:00.686275   51238 config.go:182] Loaded profile config "default-k8s-diff-port-415555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:07:00.686401   51238 config.go:182] Loaded profile config "embed-certs-415155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:07:00.686531   51238 config.go:182] Loaded profile config "no-preload-215748": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:07:00.686623   51238 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 18:07:00.724015   51238 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 18:07:00.725303   51238 start.go:298] selected driver: kvm2
	I0919 18:07:00.725320   51238 start.go:902] validating driver "kvm2" against <nil>
	I0919 18:07:00.725332   51238 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:07:00.726260   51238 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:07:00.726353   51238 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 18:07:00.741475   51238 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 18:07:00.741515   51238 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0919 18:07:00.741536   51238 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0919 18:07:00.741773   51238 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0919 18:07:00.741816   51238 cni.go:84] Creating CNI manager for ""
	I0919 18:07:00.741832   51238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:07:00.741847   51238 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:07:00.741859   51238 start_flags.go:321] config:
	{Name:newest-cni-199016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-199016 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 18:07:00.742025   51238 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:07:00.744121   51238 out.go:177] * Starting control plane node newest-cni-199016 in cluster newest-cni-199016
	I0919 18:07:00.745351   51238 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 18:07:00.745389   51238 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 18:07:00.745398   51238 cache.go:57] Caching tarball of preloaded images
	I0919 18:07:00.745475   51238 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 18:07:00.745484   51238 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 18:07:00.745566   51238 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/newest-cni-199016/config.json ...
	I0919 18:07:00.745581   51238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/newest-cni-199016/config.json: {Name:mk4f2793cfe378b349ecf6d82f5f4527234b8149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:07:00.745694   51238 start.go:365] acquiring machines lock for newest-cni-199016: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 18:07:00.745719   51238 start.go:369] acquired machines lock for "newest-cni-199016" in 14.668µs
	I0919 18:07:00.745736   51238 start.go:93] Provisioning new machine with config: &{Name:newest-cni-199016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-199016 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:07:00.745833   51238 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 18:07:00.747513   51238 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 18:07:00.747637   51238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:07:00.747672   51238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:07:00.761672   51238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32937
	I0919 18:07:00.762105   51238 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:07:00.762663   51238 main.go:141] libmachine: Using API Version  1
	I0919 18:07:00.762686   51238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:07:00.763052   51238 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:07:00.763313   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetMachineName
	I0919 18:07:00.763539   51238 main.go:141] libmachine: (newest-cni-199016) Calling .DriverName
	I0919 18:07:00.763761   51238 start.go:159] libmachine.API.Create for "newest-cni-199016" (driver="kvm2")
	I0919 18:07:00.763794   51238 client.go:168] LocalClient.Create starting
	I0919 18:07:00.763830   51238 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem
	I0919 18:07:00.763872   51238 main.go:141] libmachine: Decoding PEM data...
	I0919 18:07:00.763903   51238 main.go:141] libmachine: Parsing certificate...
	I0919 18:07:00.763971   51238 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem
	I0919 18:07:00.763995   51238 main.go:141] libmachine: Decoding PEM data...
	I0919 18:07:00.764023   51238 main.go:141] libmachine: Parsing certificate...
	I0919 18:07:00.764048   51238 main.go:141] libmachine: Running pre-create checks...
	I0919 18:07:00.764066   51238 main.go:141] libmachine: (newest-cni-199016) Calling .PreCreateCheck
	I0919 18:07:00.764489   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetConfigRaw
	I0919 18:07:00.764955   51238 main.go:141] libmachine: Creating machine...
	I0919 18:07:00.764970   51238 main.go:141] libmachine: (newest-cni-199016) Calling .Create
	I0919 18:07:00.765121   51238 main.go:141] libmachine: (newest-cni-199016) Creating KVM machine...
	I0919 18:07:00.766322   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found existing default KVM network
	I0919 18:07:00.767469   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:00.767287   51261 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:8e:d5:60} reservation:<nil>}
	I0919 18:07:00.768322   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:00.768246   51261 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:62:ff:52} reservation:<nil>}
	I0919 18:07:00.769135   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:00.769060   51261 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:09:d8:89} reservation:<nil>}
	I0919 18:07:00.770061   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:00.769990   51261 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002c36b0}
	I0919 18:07:00.775457   51238 main.go:141] libmachine: (newest-cni-199016) DBG | trying to create private KVM network mk-newest-cni-199016 192.168.72.0/24...
	I0919 18:07:00.850853   51238 main.go:141] libmachine: (newest-cni-199016) Setting up store path in /home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016 ...
	I0919 18:07:00.850899   51238 main.go:141] libmachine: (newest-cni-199016) DBG | private KVM network mk-newest-cni-199016 192.168.72.0/24 created
	I0919 18:07:00.850934   51238 main.go:141] libmachine: (newest-cni-199016) Building disk image from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 18:07:00.850964   51238 main.go:141] libmachine: (newest-cni-199016) Downloading /home/jenkins/minikube-integration/17240-6042/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 18:07:00.851014   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:00.850649   51261 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 18:07:01.065624   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:01.065489   51261 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/id_rsa...
	I0919 18:07:01.170293   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:01.170170   51261 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/newest-cni-199016.rawdisk...
	I0919 18:07:01.170325   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Writing magic tar header
	I0919 18:07:01.170339   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Writing SSH key tar header
	I0919 18:07:01.170354   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:01.170276   51261 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016 ...
	I0919 18:07:01.170444   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016
	I0919 18:07:01.170481   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines
	I0919 18:07:01.170502   51238 main.go:141] libmachine: (newest-cni-199016) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016 (perms=drwx------)
	I0919 18:07:01.170522   51238 main.go:141] libmachine: (newest-cni-199016) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines (perms=drwxr-xr-x)
	I0919 18:07:01.170533   51238 main.go:141] libmachine: (newest-cni-199016) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube (perms=drwxr-xr-x)
	I0919 18:07:01.170545   51238 main.go:141] libmachine: (newest-cni-199016) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042 (perms=drwxrwxr-x)
	I0919 18:07:01.170555   51238 main.go:141] libmachine: (newest-cni-199016) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 18:07:01.170565   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 18:07:01.170580   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042
	I0919 18:07:01.170602   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 18:07:01.170621   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Checking permissions on dir: /home/jenkins
	I0919 18:07:01.170628   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Checking permissions on dir: /home
	I0919 18:07:01.170638   51238 main.go:141] libmachine: (newest-cni-199016) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 18:07:01.170648   51238 main.go:141] libmachine: (newest-cni-199016) Creating domain...
	I0919 18:07:01.170660   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Skipping /home - not owner
	I0919 18:07:01.171686   51238 main.go:141] libmachine: (newest-cni-199016) define libvirt domain using xml: 
	I0919 18:07:01.171701   51238 main.go:141] libmachine: (newest-cni-199016) <domain type='kvm'>
	I0919 18:07:01.171715   51238 main.go:141] libmachine: (newest-cni-199016)   <name>newest-cni-199016</name>
	I0919 18:07:01.171732   51238 main.go:141] libmachine: (newest-cni-199016)   <memory unit='MiB'>2200</memory>
	I0919 18:07:01.171747   51238 main.go:141] libmachine: (newest-cni-199016)   <vcpu>2</vcpu>
	I0919 18:07:01.171763   51238 main.go:141] libmachine: (newest-cni-199016)   <features>
	I0919 18:07:01.171774   51238 main.go:141] libmachine: (newest-cni-199016)     <acpi/>
	I0919 18:07:01.171785   51238 main.go:141] libmachine: (newest-cni-199016)     <apic/>
	I0919 18:07:01.171798   51238 main.go:141] libmachine: (newest-cni-199016)     <pae/>
	I0919 18:07:01.171807   51238 main.go:141] libmachine: (newest-cni-199016)     
	I0919 18:07:01.171821   51238 main.go:141] libmachine: (newest-cni-199016)   </features>
	I0919 18:07:01.171834   51238 main.go:141] libmachine: (newest-cni-199016)   <cpu mode='host-passthrough'>
	I0919 18:07:01.171894   51238 main.go:141] libmachine: (newest-cni-199016)   
	I0919 18:07:01.171919   51238 main.go:141] libmachine: (newest-cni-199016)   </cpu>
	I0919 18:07:01.171932   51238 main.go:141] libmachine: (newest-cni-199016)   <os>
	I0919 18:07:01.171946   51238 main.go:141] libmachine: (newest-cni-199016)     <type>hvm</type>
	I0919 18:07:01.171961   51238 main.go:141] libmachine: (newest-cni-199016)     <boot dev='cdrom'/>
	I0919 18:07:01.171975   51238 main.go:141] libmachine: (newest-cni-199016)     <boot dev='hd'/>
	I0919 18:07:01.171989   51238 main.go:141] libmachine: (newest-cni-199016)     <bootmenu enable='no'/>
	I0919 18:07:01.172007   51238 main.go:141] libmachine: (newest-cni-199016)   </os>
	I0919 18:07:01.172022   51238 main.go:141] libmachine: (newest-cni-199016)   <devices>
	I0919 18:07:01.172037   51238 main.go:141] libmachine: (newest-cni-199016)     <disk type='file' device='cdrom'>
	I0919 18:07:01.172057   51238 main.go:141] libmachine: (newest-cni-199016)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/boot2docker.iso'/>
	I0919 18:07:01.172073   51238 main.go:141] libmachine: (newest-cni-199016)       <target dev='hdc' bus='scsi'/>
	I0919 18:07:01.172090   51238 main.go:141] libmachine: (newest-cni-199016)       <readonly/>
	I0919 18:07:01.172105   51238 main.go:141] libmachine: (newest-cni-199016)     </disk>
	I0919 18:07:01.172119   51238 main.go:141] libmachine: (newest-cni-199016)     <disk type='file' device='disk'>
	I0919 18:07:01.172136   51238 main.go:141] libmachine: (newest-cni-199016)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 18:07:01.172155   51238 main.go:141] libmachine: (newest-cni-199016)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/newest-cni-199016.rawdisk'/>
	I0919 18:07:01.172177   51238 main.go:141] libmachine: (newest-cni-199016)       <target dev='hda' bus='virtio'/>
	I0919 18:07:01.172190   51238 main.go:141] libmachine: (newest-cni-199016)     </disk>
	I0919 18:07:01.172219   51238 main.go:141] libmachine: (newest-cni-199016)     <interface type='network'>
	I0919 18:07:01.172243   51238 main.go:141] libmachine: (newest-cni-199016)       <source network='mk-newest-cni-199016'/>
	I0919 18:07:01.172255   51238 main.go:141] libmachine: (newest-cni-199016)       <model type='virtio'/>
	I0919 18:07:01.172264   51238 main.go:141] libmachine: (newest-cni-199016)     </interface>
	I0919 18:07:01.172276   51238 main.go:141] libmachine: (newest-cni-199016)     <interface type='network'>
	I0919 18:07:01.172291   51238 main.go:141] libmachine: (newest-cni-199016)       <source network='default'/>
	I0919 18:07:01.172306   51238 main.go:141] libmachine: (newest-cni-199016)       <model type='virtio'/>
	I0919 18:07:01.172319   51238 main.go:141] libmachine: (newest-cni-199016)     </interface>
	I0919 18:07:01.172334   51238 main.go:141] libmachine: (newest-cni-199016)     <serial type='pty'>
	I0919 18:07:01.172346   51238 main.go:141] libmachine: (newest-cni-199016)       <target port='0'/>
	I0919 18:07:01.172357   51238 main.go:141] libmachine: (newest-cni-199016)     </serial>
	I0919 18:07:01.172371   51238 main.go:141] libmachine: (newest-cni-199016)     <console type='pty'>
	I0919 18:07:01.172396   51238 main.go:141] libmachine: (newest-cni-199016)       <target type='serial' port='0'/>
	I0919 18:07:01.172427   51238 main.go:141] libmachine: (newest-cni-199016)     </console>
	I0919 18:07:01.172454   51238 main.go:141] libmachine: (newest-cni-199016)     <rng model='virtio'>
	I0919 18:07:01.172470   51238 main.go:141] libmachine: (newest-cni-199016)       <backend model='random'>/dev/random</backend>
	I0919 18:07:01.172484   51238 main.go:141] libmachine: (newest-cni-199016)     </rng>
	I0919 18:07:01.172502   51238 main.go:141] libmachine: (newest-cni-199016)     
	I0919 18:07:01.172516   51238 main.go:141] libmachine: (newest-cni-199016)     
	I0919 18:07:01.172533   51238 main.go:141] libmachine: (newest-cni-199016)   </devices>
	I0919 18:07:01.172547   51238 main.go:141] libmachine: (newest-cni-199016) </domain>
	I0919 18:07:01.172559   51238 main.go:141] libmachine: (newest-cni-199016) 
	I0919 18:07:01.177006   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:d5:31:80 in network default
	I0919 18:07:01.177533   51238 main.go:141] libmachine: (newest-cni-199016) Ensuring networks are active...
	I0919 18:07:01.177550   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:01.178330   51238 main.go:141] libmachine: (newest-cni-199016) Ensuring network default is active
	I0919 18:07:01.178698   51238 main.go:141] libmachine: (newest-cni-199016) Ensuring network mk-newest-cni-199016 is active
	I0919 18:07:01.179209   51238 main.go:141] libmachine: (newest-cni-199016) Getting domain xml...
	I0919 18:07:01.179921   51238 main.go:141] libmachine: (newest-cni-199016) Creating domain...
	I0919 18:07:02.466290   51238 main.go:141] libmachine: (newest-cni-199016) Waiting to get IP...
	I0919 18:07:02.467089   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:02.467561   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:02.467590   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:02.467516   51261 retry.go:31] will retry after 193.981571ms: waiting for machine to come up
	I0919 18:07:02.662971   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:02.663577   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:02.663607   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:02.663527   51261 retry.go:31] will retry after 296.795884ms: waiting for machine to come up
	I0919 18:07:02.962036   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:02.962510   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:02.962549   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:02.962473   51261 retry.go:31] will retry after 298.24665ms: waiting for machine to come up
	I0919 18:07:03.261891   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:03.262311   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:03.262341   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:03.262282   51261 retry.go:31] will retry after 522.263243ms: waiting for machine to come up
	I0919 18:07:03.786443   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:03.786887   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:03.786917   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:03.786852   51261 retry.go:31] will retry after 573.218679ms: waiting for machine to come up
	I0919 18:07:04.361379   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:04.361772   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:04.361805   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:04.361716   51261 retry.go:31] will retry after 705.716815ms: waiting for machine to come up
	I0919 18:07:05.068649   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:05.069181   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:05.069217   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:05.069118   51261 retry.go:31] will retry after 910.159429ms: waiting for machine to come up
	I0919 18:07:05.980655   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:05.981250   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:05.981276   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:05.981200   51261 retry.go:31] will retry after 1.176142766s: waiting for machine to come up
	I0919 18:07:07.159636   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:07.160154   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:07.160196   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:07.160094   51261 retry.go:31] will retry after 1.652703075s: waiting for machine to come up
	I0919 18:07:08.814455   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:08.814948   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:08.814980   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:08.814879   51261 retry.go:31] will retry after 1.440768199s: waiting for machine to come up
	I0919 18:07:10.256857   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:10.257380   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:10.257411   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:10.257325   51261 retry.go:31] will retry after 2.820995785s: waiting for machine to come up
	I0919 18:07:13.079808   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:13.080401   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:13.080443   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:13.080349   51261 retry.go:31] will retry after 3.097183065s: waiting for machine to come up
	I0919 18:07:16.179390   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:16.179888   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:16.179911   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:16.179846   51261 retry.go:31] will retry after 3.930752364s: waiting for machine to come up
	I0919 18:07:20.113590   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:20.114078   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find current IP address of domain newest-cni-199016 in network mk-newest-cni-199016
	I0919 18:07:20.114102   51238 main.go:141] libmachine: (newest-cni-199016) DBG | I0919 18:07:20.114024   51261 retry.go:31] will retry after 3.542814545s: waiting for machine to come up
	I0919 18:07:23.660340   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:23.660956   51238 main.go:141] libmachine: (newest-cni-199016) Found IP for machine: 192.168.72.220
	I0919 18:07:23.660975   51238 main.go:141] libmachine: (newest-cni-199016) Reserving static IP address...
	I0919 18:07:23.660986   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has current primary IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:23.661378   51238 main.go:141] libmachine: (newest-cni-199016) DBG | unable to find host DHCP lease matching {name: "newest-cni-199016", mac: "52:54:00:9b:f1:d4", ip: "192.168.72.220"} in network mk-newest-cni-199016
	I0919 18:07:23.740805   51238 main.go:141] libmachine: (newest-cni-199016) Reserved static IP address: 192.168.72.220
	I0919 18:07:23.740842   51238 main.go:141] libmachine: (newest-cni-199016) Waiting for SSH to be available...
	I0919 18:07:23.740856   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Getting to WaitForSSH function...
	I0919 18:07:23.743286   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:23.743658   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:23.743690   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:23.743867   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Using SSH client type: external
	I0919 18:07:23.743899   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/id_rsa (-rw-------)
	I0919 18:07:23.743953   51238 main.go:141] libmachine: (newest-cni-199016) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 18:07:23.743981   51238 main.go:141] libmachine: (newest-cni-199016) DBG | About to run SSH command:
	I0919 18:07:23.743990   51238 main.go:141] libmachine: (newest-cni-199016) DBG | exit 0
	I0919 18:07:23.836799   51238 main.go:141] libmachine: (newest-cni-199016) DBG | SSH cmd err, output: <nil>: 
	I0919 18:07:23.837091   51238 main.go:141] libmachine: (newest-cni-199016) KVM machine creation complete!
	I0919 18:07:23.837476   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetConfigRaw
	I0919 18:07:23.838057   51238 main.go:141] libmachine: (newest-cni-199016) Calling .DriverName
	I0919 18:07:23.838263   51238 main.go:141] libmachine: (newest-cni-199016) Calling .DriverName
	I0919 18:07:23.838483   51238 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 18:07:23.838516   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetState
	I0919 18:07:23.839730   51238 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 18:07:23.839744   51238 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 18:07:23.839750   51238 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 18:07:23.839757   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:23.842514   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:23.842939   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:23.842987   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:23.843235   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:23.843431   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:23.843606   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:23.843791   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:23.844003   51238 main.go:141] libmachine: Using SSH client type: native
	I0919 18:07:23.844365   51238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I0919 18:07:23.844380   51238 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 18:07:23.963812   51238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:07:23.963838   51238 main.go:141] libmachine: Detecting the provisioner...
	I0919 18:07:23.963847   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:23.966752   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:23.967132   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:23.967157   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:23.967332   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:23.967544   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:23.967725   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:23.967863   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:23.968014   51238 main.go:141] libmachine: Using SSH client type: native
	I0919 18:07:23.968332   51238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I0919 18:07:23.968342   51238 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 18:07:24.089401   51238 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0919 18:07:24.089498   51238 main.go:141] libmachine: found compatible host: buildroot
	I0919 18:07:24.089510   51238 main.go:141] libmachine: Provisioning with buildroot...
	I0919 18:07:24.089523   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetMachineName
	I0919 18:07:24.089824   51238 buildroot.go:166] provisioning hostname "newest-cni-199016"
	I0919 18:07:24.089856   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetMachineName
	I0919 18:07:24.090018   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:24.092676   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.092974   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:24.093013   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.093113   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:24.093285   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:24.093496   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:24.093644   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:24.093820   51238 main.go:141] libmachine: Using SSH client type: native
	I0919 18:07:24.094266   51238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I0919 18:07:24.094294   51238 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-199016 && echo "newest-cni-199016" | sudo tee /etc/hostname
	I0919 18:07:24.226297   51238 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-199016
	
	I0919 18:07:24.226320   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:24.229358   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.229691   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:24.229727   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.229958   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:24.230174   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:24.230408   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:24.230586   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:24.230767   51238 main.go:141] libmachine: Using SSH client type: native
	I0919 18:07:24.231279   51238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I0919 18:07:24.231301   51238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-199016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-199016/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-199016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:07:24.362204   51238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:07:24.362235   51238 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-6042/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-6042/.minikube}
	I0919 18:07:24.362294   51238 buildroot.go:174] setting up certificates
	I0919 18:07:24.362317   51238 provision.go:83] configureAuth start
	I0919 18:07:24.362341   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetMachineName
	I0919 18:07:24.362654   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetIP
	I0919 18:07:24.365589   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.366024   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:24.366052   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.366169   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:24.368309   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.368656   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:24.368682   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.368845   51238 provision.go:138] copyHostCerts
	I0919 18:07:24.368901   51238 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem, removing ...
	I0919 18:07:24.368914   51238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem
	I0919 18:07:24.369000   51238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/cert.pem (1123 bytes)
	I0919 18:07:24.369135   51238 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem, removing ...
	I0919 18:07:24.369148   51238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem
	I0919 18:07:24.369186   51238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/key.pem (1675 bytes)
	I0919 18:07:24.369255   51238 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem, removing ...
	I0919 18:07:24.369265   51238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem
	I0919 18:07:24.369297   51238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-6042/.minikube/ca.pem (1082 bytes)
	I0919 18:07:24.369355   51238 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca-key.pem org=jenkins.newest-cni-199016 san=[192.168.72.220 192.168.72.220 localhost 127.0.0.1 minikube newest-cni-199016]
	I0919 18:07:24.627901   51238 provision.go:172] copyRemoteCerts
	I0919 18:07:24.627954   51238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:07:24.627975   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:24.630579   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.630921   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:24.630958   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.631151   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:24.631389   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:24.631574   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:24.631711   51238 sshutil.go:53] new ssh client: &{IP:192.168.72.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/id_rsa Username:docker}
	I0919 18:07:24.722192   51238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 18:07:24.749467   51238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 18:07:24.774422   51238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0919 18:07:24.800060   51238 provision.go:86] duration metric: configureAuth took 437.726703ms
	I0919 18:07:24.800086   51238 buildroot.go:189] setting minikube options for container-runtime
	I0919 18:07:24.800298   51238 config.go:182] Loaded profile config "newest-cni-199016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:07:24.800365   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:24.803457   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.804000   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:24.804039   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:24.804158   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:24.804378   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:24.804595   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:24.804768   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:24.804966   51238 main.go:141] libmachine: Using SSH client type: native
	I0919 18:07:24.805325   51238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I0919 18:07:24.805343   51238 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 18:07:25.138871   51238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 18:07:25.138900   51238 main.go:141] libmachine: Checking connection to Docker...
	I0919 18:07:25.138911   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetURL
	I0919 18:07:25.140284   51238 main.go:141] libmachine: (newest-cni-199016) DBG | Using libvirt version 6000000
	I0919 18:07:25.142498   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.142924   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:25.142960   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.143118   51238 main.go:141] libmachine: Docker is up and running!
	I0919 18:07:25.143140   51238 main.go:141] libmachine: Reticulating splines...
	I0919 18:07:25.143146   51238 client.go:171] LocalClient.Create took 24.379343138s
	I0919 18:07:25.143167   51238 start.go:167] duration metric: libmachine.API.Create for "newest-cni-199016" took 24.379406835s
	I0919 18:07:25.143176   51238 start.go:300] post-start starting for "newest-cni-199016" (driver="kvm2")
	I0919 18:07:25.143184   51238 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:07:25.143202   51238 main.go:141] libmachine: (newest-cni-199016) Calling .DriverName
	I0919 18:07:25.143441   51238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:07:25.143467   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:25.145860   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.146197   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:25.146233   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.146378   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:25.146578   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:25.146706   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:25.146820   51238 sshutil.go:53] new ssh client: &{IP:192.168.72.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/id_rsa Username:docker}
	I0919 18:07:25.233927   51238 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:07:25.238179   51238 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 18:07:25.238220   51238 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/addons for local assets ...
	I0919 18:07:25.238288   51238 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-6042/.minikube/files for local assets ...
	I0919 18:07:25.238371   51238 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem -> 132392.pem in /etc/ssl/certs
	I0919 18:07:25.238474   51238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 18:07:25.246849   51238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/ssl/certs/132392.pem --> /etc/ssl/certs/132392.pem (1708 bytes)
	I0919 18:07:25.271035   51238 start.go:303] post-start completed in 127.846511ms
	I0919 18:07:25.271103   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetConfigRaw
	I0919 18:07:25.271712   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetIP
	I0919 18:07:25.274371   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.274785   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:25.274817   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.275025   51238 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/newest-cni-199016/config.json ...
	I0919 18:07:25.275284   51238 start.go:128] duration metric: createHost completed in 24.529430845s
	I0919 18:07:25.275314   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:25.277821   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.278118   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:25.278161   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.278351   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:25.278558   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:25.278714   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:25.278866   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:25.279019   51238 main.go:141] libmachine: Using SSH client type: native
	I0919 18:07:25.279310   51238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.220 22 <nil> <nil>}
	I0919 18:07:25.279322   51238 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 18:07:25.401538   51238 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695146845.379846313
	
	I0919 18:07:25.401564   51238 fix.go:206] guest clock: 1695146845.379846313
	I0919 18:07:25.401573   51238 fix.go:219] Guest: 2023-09-19 18:07:25.379846313 +0000 UTC Remote: 2023-09-19 18:07:25.275299004 +0000 UTC m=+24.633752670 (delta=104.547309ms)
	I0919 18:07:25.401596   51238 fix.go:190] guest clock delta is within tolerance: 104.547309ms
	I0919 18:07:25.401602   51238 start.go:83] releasing machines lock for "newest-cni-199016", held for 24.655873533s
	I0919 18:07:25.401625   51238 main.go:141] libmachine: (newest-cni-199016) Calling .DriverName
	I0919 18:07:25.401893   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetIP
	I0919 18:07:25.404696   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.405116   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:25.405154   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.405302   51238 main.go:141] libmachine: (newest-cni-199016) Calling .DriverName
	I0919 18:07:25.405780   51238 main.go:141] libmachine: (newest-cni-199016) Calling .DriverName
	I0919 18:07:25.405949   51238 main.go:141] libmachine: (newest-cni-199016) Calling .DriverName
	I0919 18:07:25.406061   51238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:07:25.406109   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:25.406155   51238 ssh_runner.go:195] Run: cat /version.json
	I0919 18:07:25.406177   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHHostname
	I0919 18:07:25.408756   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.409022   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.409101   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:25.409139   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.409231   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:25.409385   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:25.409407   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:25.409424   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:25.409556   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:25.409636   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHPort
	I0919 18:07:25.409725   51238 sshutil.go:53] new ssh client: &{IP:192.168.72.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/id_rsa Username:docker}
	I0919 18:07:25.409788   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHKeyPath
	I0919 18:07:25.409934   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetSSHUsername
	I0919 18:07:25.410107   51238 sshutil.go:53] new ssh client: &{IP:192.168.72.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/newest-cni-199016/id_rsa Username:docker}
	I0919 18:07:25.515982   51238 ssh_runner.go:195] Run: systemctl --version
	I0919 18:07:25.522596   51238 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 18:07:25.685101   51238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 18:07:25.691971   51238 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 18:07:25.692049   51238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:07:25.708521   51238 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 18:07:25.708548   51238 start.go:469] detecting cgroup driver to use...
	I0919 18:07:25.708617   51238 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 18:07:25.722753   51238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 18:07:25.737471   51238 docker.go:196] disabling cri-docker service (if available) ...
	I0919 18:07:25.737530   51238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 18:07:25.752744   51238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 18:07:25.767368   51238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 18:07:25.876048   51238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 18:07:25.998992   51238 docker.go:212] disabling docker service ...
	I0919 18:07:25.999060   51238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 18:07:26.013379   51238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 18:07:26.026235   51238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 18:07:26.145243   51238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 18:07:26.273135   51238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 18:07:26.287073   51238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:07:26.306519   51238 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0919 18:07:26.306588   51238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:07:26.315890   51238 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 18:07:26.315958   51238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:07:26.325673   51238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:07:26.334929   51238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:07:26.344059   51238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:07:26.354121   51238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:07:26.362176   51238 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 18:07:26.362235   51238 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 18:07:26.375339   51238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:07:26.385589   51238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:07:26.516076   51238 ssh_runner.go:195] Run: sudo systemctl restart crio
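(Context for the sed edits above: they rewrite minikube's CRI-O drop-in before the runtime is restarted. As a minimal sketch only — assuming the stock /etc/crio/crio.conf.d/02-crio.conf layout on the Buildroot guest, where the exact surrounding sections and keys may differ — the resulting drop-in would contain roughly:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

This can be spot-checked on the node with `grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf`; that check is a debugging aid, not part of the test run.)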
	I0919 18:07:26.691506   51238 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 18:07:26.691586   51238 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 18:07:26.698773   51238 start.go:537] Will wait 60s for crictl version
	I0919 18:07:26.698836   51238 ssh_runner.go:195] Run: which crictl
	I0919 18:07:26.703056   51238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:07:26.755384   51238 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0919 18:07:26.755477   51238 ssh_runner.go:195] Run: crio --version
	I0919 18:07:26.806155   51238 ssh_runner.go:195] Run: crio --version
	I0919 18:07:26.866907   51238 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I0919 18:07:26.868464   51238 main.go:141] libmachine: (newest-cni-199016) Calling .GetIP
	I0919 18:07:26.870996   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:26.871351   51238 main.go:141] libmachine: (newest-cni-199016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:f1:d4", ip: ""} in network mk-newest-cni-199016: {Iface:virbr1 ExpiryTime:2023-09-19 19:07:17 +0000 UTC Type:0 Mac:52:54:00:9b:f1:d4 Iaid: IPaddr:192.168.72.220 Prefix:24 Hostname:newest-cni-199016 Clientid:01:52:54:00:9b:f1:d4}
	I0919 18:07:26.871378   51238 main.go:141] libmachine: (newest-cni-199016) DBG | domain newest-cni-199016 has defined IP address 192.168.72.220 and MAC address 52:54:00:9b:f1:d4 in network mk-newest-cni-199016
	I0919 18:07:26.871540   51238 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0919 18:07:26.876038   51238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:07:26.889909   51238 localpath.go:92] copying /home/jenkins/minikube-integration/17240-6042/.minikube/client.crt -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/newest-cni-199016/client.crt
	I0919 18:07:26.890044   51238 localpath.go:117] copying /home/jenkins/minikube-integration/17240-6042/.minikube/client.key -> /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/newest-cni-199016/client.key
	I0919 18:07:26.891759   51238 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0919 18:07:26.893318   51238 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 18:07:26.893403   51238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:07:26.927136   51238 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I0919 18:07:26.927194   51238 ssh_runner.go:195] Run: which lz4
	I0919 18:07:26.932095   51238 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0919 18:07:26.936727   51238 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 18:07:26.936755   51238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I0919 18:07:28.764447   51238 crio.go:444] Took 1.832374 seconds to copy over tarball
	I0919 18:07:28.764524   51238 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:46:59 UTC, ends at Tue 2023-09-19 18:07:31 UTC. --
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.340367623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146851340347940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d4e92898-79a2-474d-be5c-1c0599a4ed41 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.341894464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=47a49a2e-e58a-4d4a-bea4-5343ef32b97b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.341994149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=47a49a2e-e58a-4d4a-bea4-5343ef32b97b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.342246776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0,PodSandboxId:2ceccaf32c2604f9e32dd74cd7cb50a2fdf6bc2d3e03f0b9f5aab24c2c97237b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1695145961269647877,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c8d577d-e182-4428-bb14-10b679241771,},Annotations:map[string]string{io.kubernetes.container.hash: b1a22052,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8,PodSandboxId:7ae3d7c4db9bf391060780ee0816c2f40e1f2ae0ec1f1514d9d364602560aaf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1695145960572508095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n478x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9824f626-4a8b-485e-aa70-d88b5ebfb085,},Annotations:map[string]string{io.kubernetes.container.hash: 495fc4ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf,PodSandboxId:9967028da52b46f2a80642eacbf8d46c0412eb643c20712b5987379877ac450c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1695145959042729592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hk6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1512b039-4c8e-45bc-bbca-82215ea569eb,},Annotations:map[string]string{io.kubernetes.container.hash: e3df76e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10,PodSandboxId:a442fa9c6b7c40b30e229b9e68c97a7c05caca0882298a15ef9d2f48e4a9661a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1695145937823443305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0deda193e5726e1a257e3361c4c96b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 63532bb8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5,PodSandboxId:47e2bdc83d92fe8b69d5b1c6c6d822924a64b4f5e646d84cd551fc32ecc8b93e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1695145937690907486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f07e1c562a8eb398f595a76e2f65e99,},Annotations:map
[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17,PodSandboxId:38cb6bd4bb8b77197263be98d8450acd9483e3375518d07373f489689a929363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1695145937460128631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d1bfbc570aa408ddbdd6003
f434ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53,PodSandboxId:29db978b9b0883902b3f7bb946100d679ea7660856a06869f005738ba59206f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1695145937293805526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19b2d9aefcad7a764a7355031330539,},A
nnotations:map[string]string{io.kubernetes.container.hash: e44faa14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=47a49a2e-e58a-4d4a-bea4-5343ef32b97b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.391776567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=62b4288b-aec8-4423-910d-fdb44efc14ef name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.391860338Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=62b4288b-aec8-4423-910d-fdb44efc14ef name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.393745576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cb80b29f-7176-4d38-8194-ce79cf7cf8ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.394101301Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146851394085577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=cb80b29f-7176-4d38-8194-ce79cf7cf8ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.394844124Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=49df3123-1a97-4da5-8f8b-08ebae963f83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.394894410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=49df3123-1a97-4da5-8f8b-08ebae963f83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.395047098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0,PodSandboxId:2ceccaf32c2604f9e32dd74cd7cb50a2fdf6bc2d3e03f0b9f5aab24c2c97237b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1695145961269647877,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c8d577d-e182-4428-bb14-10b679241771,},Annotations:map[string]string{io.kubernetes.container.hash: b1a22052,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8,PodSandboxId:7ae3d7c4db9bf391060780ee0816c2f40e1f2ae0ec1f1514d9d364602560aaf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1695145960572508095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n478x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9824f626-4a8b-485e-aa70-d88b5ebfb085,},Annotations:map[string]string{io.kubernetes.container.hash: 495fc4ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf,PodSandboxId:9967028da52b46f2a80642eacbf8d46c0412eb643c20712b5987379877ac450c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1695145959042729592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hk6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1512b039-4c8e-45bc-bbca-82215ea569eb,},Annotations:map[string]string{io.kubernetes.container.hash: e3df76e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10,PodSandboxId:a442fa9c6b7c40b30e229b9e68c97a7c05caca0882298a15ef9d2f48e4a9661a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1695145937823443305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0deda193e5726e1a257e3361c4c96b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 63532bb8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5,PodSandboxId:47e2bdc83d92fe8b69d5b1c6c6d822924a64b4f5e646d84cd551fc32ecc8b93e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1695145937690907486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f07e1c562a8eb398f595a76e2f65e99,},Annotations:map
[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17,PodSandboxId:38cb6bd4bb8b77197263be98d8450acd9483e3375518d07373f489689a929363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1695145937460128631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d1bfbc570aa408ddbdd6003
f434ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53,PodSandboxId:29db978b9b0883902b3f7bb946100d679ea7660856a06869f005738ba59206f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1695145937293805526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19b2d9aefcad7a764a7355031330539,},A
nnotations:map[string]string{io.kubernetes.container.hash: e44faa14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=49df3123-1a97-4da5-8f8b-08ebae963f83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.442246985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7ee22319-e1a3-4315-93ba-235db5e2a32a name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.442335660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7ee22319-e1a3-4315-93ba-235db5e2a32a name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.445048270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a1ea8e1f-8438-441a-8841-76d6d478cdd1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.445501141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146851445477633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a1ea8e1f-8438-441a-8841-76d6d478cdd1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.446563088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=64560675-62c6-4d32-a039-ea2b1b2fec13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.446735467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=64560675-62c6-4d32-a039-ea2b1b2fec13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.447005907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0,PodSandboxId:2ceccaf32c2604f9e32dd74cd7cb50a2fdf6bc2d3e03f0b9f5aab24c2c97237b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1695145961269647877,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c8d577d-e182-4428-bb14-10b679241771,},Annotations:map[string]string{io.kubernetes.container.hash: b1a22052,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8,PodSandboxId:7ae3d7c4db9bf391060780ee0816c2f40e1f2ae0ec1f1514d9d364602560aaf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1695145960572508095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n478x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9824f626-4a8b-485e-aa70-d88b5ebfb085,},Annotations:map[string]string{io.kubernetes.container.hash: 495fc4ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf,PodSandboxId:9967028da52b46f2a80642eacbf8d46c0412eb643c20712b5987379877ac450c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1695145959042729592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hk6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1512b039-4c8e-45bc-bbca-82215ea569eb,},Annotations:map[string]string{io.kubernetes.container.hash: e3df76e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10,PodSandboxId:a442fa9c6b7c40b30e229b9e68c97a7c05caca0882298a15ef9d2f48e4a9661a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1695145937823443305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0deda193e5726e1a257e3361c4c96b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 63532bb8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5,PodSandboxId:47e2bdc83d92fe8b69d5b1c6c6d822924a64b4f5e646d84cd551fc32ecc8b93e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1695145937690907486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f07e1c562a8eb398f595a76e2f65e99,},Annotations:map
[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17,PodSandboxId:38cb6bd4bb8b77197263be98d8450acd9483e3375518d07373f489689a929363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1695145937460128631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d1bfbc570aa408ddbdd6003
f434ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53,PodSandboxId:29db978b9b0883902b3f7bb946100d679ea7660856a06869f005738ba59206f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1695145937293805526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19b2d9aefcad7a764a7355031330539,},A
nnotations:map[string]string{io.kubernetes.container.hash: e44faa14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=64560675-62c6-4d32-a039-ea2b1b2fec13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.490534715Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0088eceb-a223-4d4d-894d-45e3cbc0f864 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.490728480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0088eceb-a223-4d4d-894d-45e3cbc0f864 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.492230066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c8c2336f-55d6-4770-9a0e-db6758b3af6c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.493070643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146851493049041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=c8c2336f-55d6-4770-9a0e-db6758b3af6c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.493825826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=619c6a73-0e2b-4dce-94ca-104dbaddf50f name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.493892047Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=619c6a73-0e2b-4dce-94ca-104dbaddf50f name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:31 no-preload-215748 crio[725]: time="2023-09-19 18:07:31.494112783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0,PodSandboxId:2ceccaf32c2604f9e32dd74cd7cb50a2fdf6bc2d3e03f0b9f5aab24c2c97237b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1695145961269647877,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c8d577d-e182-4428-bb14-10b679241771,},Annotations:map[string]string{io.kubernetes.container.hash: b1a22052,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8,PodSandboxId:7ae3d7c4db9bf391060780ee0816c2f40e1f2ae0ec1f1514d9d364602560aaf5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1695145960572508095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n478x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9824f626-4a8b-485e-aa70-d88b5ebfb085,},Annotations:map[string]string{io.kubernetes.container.hash: 495fc4ea,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf,PodSandboxId:9967028da52b46f2a80642eacbf8d46c0412eb643c20712b5987379877ac450c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1695145959042729592,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hk6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 1512b039-4c8e-45bc-bbca-82215ea569eb,},Annotations:map[string]string{io.kubernetes.container.hash: e3df76e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10,PodSandboxId:a442fa9c6b7c40b30e229b9e68c97a7c05caca0882298a15ef9d2f48e4a9661a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1695145937823443305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0deda193e5726e1a257e3361c4c96b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 63532bb8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5,PodSandboxId:47e2bdc83d92fe8b69d5b1c6c6d822924a64b4f5e646d84cd551fc32ecc8b93e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1695145937690907486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f07e1c562a8eb398f595a76e2f65e99,},Annotations:map
[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17,PodSandboxId:38cb6bd4bb8b77197263be98d8450acd9483e3375518d07373f489689a929363,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1695145937460128631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d1bfbc570aa408ddbdd6003
f434ee6,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53,PodSandboxId:29db978b9b0883902b3f7bb946100d679ea7660856a06869f005738ba59206f6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1695145937293805526,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-215748,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e19b2d9aefcad7a764a7355031330539,},A
nnotations:map[string]string{io.kubernetes.container.hash: e44faa14,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=619c6a73-0e2b-4dce-94ca-104dbaddf50f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b7f19f67260b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   2ceccaf32c260       storage-provisioner
	031b71aecf891       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   7ae3d7c4db9bf       coredns-5dd5756b68-n478x
	ee3fd4f8b5459       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   14 minutes ago      Running             kube-proxy                0                   9967028da52b4       kube-proxy-hk6k2
	3dee5d1bd72fd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   a442fa9c6b7c4       etcd-no-preload-215748
	093aa73f970a7       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   15 minutes ago      Running             kube-scheduler            2                   47e2bdc83d92f       kube-scheduler-no-preload-215748
	4f81a863b5a96       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   15 minutes ago      Running             kube-controller-manager   2                   38cb6bd4bb8b7       kube-controller-manager-no-preload-215748
	4c5b31233fe26       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   15 minutes ago      Running             kube-apiserver            2                   29db978b9b088       kube-apiserver-no-preload-215748
	
	* 
	* ==> coredns [031b71aecf891fc8e80074947b84d74bc160beb9e209a32ecee2905645268fd8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53752 - 38869 "HINFO IN 5738684559045176053.3221094870797899799. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015826552s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-215748
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-215748
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=no-preload-215748
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_52_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:52:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-215748
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 18:07:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 18:02:57 +0000   Tue, 19 Sep 2023 17:52:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 18:02:57 +0000   Tue, 19 Sep 2023 17:52:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 18:02:57 +0000   Tue, 19 Sep 2023 17:52:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 18:02:57 +0000   Tue, 19 Sep 2023 17:52:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    no-preload-215748
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e0fa384a46344cdacc88ca2dc5a26a7
	  System UUID:                2e0fa384-a463-44cd-acc8-8ca2dc5a26a7
	  Boot ID:                    36753ce6-6f89-4c93-a64d-f62619ce8891
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-n478x                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-215748                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-215748             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-215748    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-hk6k2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-215748             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-nwxvc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node no-preload-215748 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node no-preload-215748 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node no-preload-215748 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node no-preload-215748 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node no-preload-215748 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-215748 event: Registered Node no-preload-215748 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071309] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.392990] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.348083] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150874] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Sep19 17:47] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.397031] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.105390] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.133593] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.108373] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.223767] systemd-fstab-generator[710]: Ignoring "noauto" for root device
	[ +30.502310] systemd-fstab-generator[1227]: Ignoring "noauto" for root device
	[ +19.349722] kauditd_printk_skb: 29 callbacks suppressed
	[Sep19 17:52] systemd-fstab-generator[3819]: Ignoring "noauto" for root device
	[  +9.313451] systemd-fstab-generator[4146]: Ignoring "noauto" for root device
	[ +14.170864] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [3dee5d1bd72fd2db1eff9a774460f402b3092727ce536436f19e19085b2bef10] <==
	* {"level":"info","ts":"2023-09-19T17:52:20.092869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-19T17:52:20.092886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f received MsgPreVoteResp from aadd773bb1fe5a6f at term 1"}
	{"level":"info","ts":"2023-09-19T17:52:20.092897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f became candidate at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:20.092902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f received MsgVoteResp from aadd773bb1fe5a6f at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:20.092915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aadd773bb1fe5a6f became leader at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:20.092922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aadd773bb1fe5a6f elected leader aadd773bb1fe5a6f at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:20.096875Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aadd773bb1fe5a6f","local-member-attributes":"{Name:no-preload-215748 ClientURLs:[https://192.168.39.15:2379]}","request-path":"/0/members/aadd773bb1fe5a6f/attributes","cluster-id":"546e0a293cd37a14","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:52:20.097135Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:52:20.098897Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:52:20.109624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:52:20.109717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:52:20.110194Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:20.110947Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.15:2379"}
	{"level":"info","ts":"2023-09-19T17:52:20.111264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T17:52:20.119067Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"546e0a293cd37a14","local-member-id":"aadd773bb1fe5a6f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:20.119173Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:20.119219Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-09-19T17:54:36.74396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.85436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-19T17:54:36.744565Z","caller":"traceutil/trace.go:171","msg":"trace[1339816459] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:530; }","duration":"205.517583ms","start":"2023-09-19T17:54:36.539006Z","end":"2023-09-19T17:54:36.744524Z","steps":["trace[1339816459] 'range keys from in-memory index tree'  (duration: 204.783072ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T18:02:20.196924Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":663}
	{"level":"info","ts":"2023-09-19T18:02:20.20015Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":663,"took":"2.325682ms","hash":1240526975}
	{"level":"info","ts":"2023-09-19T18:02:20.200353Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1240526975,"revision":663,"compact-revision":-1}
	{"level":"info","ts":"2023-09-19T18:07:20.206163Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":906}
	{"level":"info","ts":"2023-09-19T18:07:20.208094Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":906,"took":"1.287328ms","hash":4088293407}
	{"level":"info","ts":"2023-09-19T18:07:20.208281Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4088293407,"revision":906,"compact-revision":663}
	
	* 
	* ==> kernel <==
	*  18:07:31 up 20 min,  0 users,  load average: 0.33, 0.19, 0.19
	Linux no-preload-215748 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4c5b31233fe26918d1ca33cfb16d339b1b8be7aa1b7f9f2e956b7a9ed96bba53] <==
	* W0919 18:03:22.940403       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:03:22.940472       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:03:22.940496       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:04:21.853077       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:05:21.852544       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:05:22.939629       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:05:22.939944       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:05:22.940076       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 18:05:22.940808       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:05:22.940870       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:05:22.942057       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:06:21.853137       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:07:21.853583       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:07:21.943092       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:07:21.943228       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:07:21.943546       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:07:22.943398       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:07:22.943538       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:07:22.943569       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 18:07:22.943727       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:07:22.943788       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:07:22.945170       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [4f81a863b5a96b2c3d361126555db604cb23a09d86b30980822178f51b436f17] <==
	* I0919 18:01:37.941517       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:02:07.567396       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:02:07.952570       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:02:37.575024       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:02:37.963229       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:03:07.584371       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:03:07.972356       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:03:37.591769       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:03:37.981842       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 18:03:49.606031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="371.78µs"
	I0919 18:04:01.599530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="122.274µs"
	E0919 18:04:07.598819       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:04:07.992153       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:04:37.606078       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:04:38.002067       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:05:07.611509       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:05:08.011405       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:05:37.618374       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:05:38.021833       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:06:07.624821       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:06:08.030800       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:06:37.632819       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:06:38.042009       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:07:07.638176       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:07:08.053925       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [ee3fd4f8b54591630de21f3fc8db2f8467e507573007c792145948b70e8f1ddf] <==
	* I0919 17:52:40.152208       1 server_others.go:69] "Using iptables proxy"
	I0919 17:52:40.172024       1 node.go:141] Successfully retrieved node IP: 192.168.39.15
	I0919 17:52:40.565091       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 17:52:40.565150       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 17:52:40.573498       1 server_others.go:152] "Using iptables Proxier"
	I0919 17:52:40.573574       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 17:52:40.573958       1 server.go:846] "Version info" version="v1.28.2"
	I0919 17:52:40.573969       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:52:40.576012       1 config.go:188] "Starting service config controller"
	I0919 17:52:40.576050       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 17:52:40.576079       1 config.go:97] "Starting endpoint slice config controller"
	I0919 17:52:40.576082       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 17:52:40.576595       1 config.go:315] "Starting node config controller"
	I0919 17:52:40.576601       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 17:52:40.677532       1 shared_informer.go:318] Caches are synced for node config
	I0919 17:52:40.677585       1 shared_informer.go:318] Caches are synced for service config
	I0919 17:52:40.677610       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [093aa73f970a7b3cdb64a2038f266925993b11e4a09671d0823e8447103062d5] <==
	* W0919 17:52:21.952185       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:52:21.952192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 17:52:22.781825       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 17:52:22.781899       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:52:22.783356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 17:52:22.783403       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0919 17:52:22.828125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 17:52:22.828192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 17:52:22.844979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 17:52:22.845060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 17:52:22.916035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:22.916161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:22.923079       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:52:22.923208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 17:52:23.053092       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:23.053162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:23.101789       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 17:52:23.101872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0919 17:52:23.103736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:23.103783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:23.172542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:23.172599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:23.188450       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 17:52:23.188503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0919 17:52:24.738304       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:46:59 UTC, ends at Tue 2023-09-19 18:07:34 UTC. --
	Sep 19 18:05:03 no-preload-215748 kubelet[4153]: E0919 18:05:03.582562    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:05:17 no-preload-215748 kubelet[4153]: E0919 18:05:17.582811    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:05:25 no-preload-215748 kubelet[4153]: E0919 18:05:25.701386    4153 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:05:25 no-preload-215748 kubelet[4153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:05:25 no-preload-215748 kubelet[4153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:05:25 no-preload-215748 kubelet[4153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:05:28 no-preload-215748 kubelet[4153]: E0919 18:05:28.581043    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:05:40 no-preload-215748 kubelet[4153]: E0919 18:05:40.581611    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:05:52 no-preload-215748 kubelet[4153]: E0919 18:05:52.581273    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:06:04 no-preload-215748 kubelet[4153]: E0919 18:06:04.580964    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:06:19 no-preload-215748 kubelet[4153]: E0919 18:06:19.582022    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:06:25 no-preload-215748 kubelet[4153]: E0919 18:06:25.702515    4153 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:06:25 no-preload-215748 kubelet[4153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:06:25 no-preload-215748 kubelet[4153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:06:25 no-preload-215748 kubelet[4153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:06:34 no-preload-215748 kubelet[4153]: E0919 18:06:34.581316    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:06:47 no-preload-215748 kubelet[4153]: E0919 18:06:47.581296    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:07:00 no-preload-215748 kubelet[4153]: E0919 18:07:00.581313    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:07:12 no-preload-215748 kubelet[4153]: E0919 18:07:12.581211    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	Sep 19 18:07:25 no-preload-215748 kubelet[4153]: E0919 18:07:25.703451    4153 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:07:25 no-preload-215748 kubelet[4153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:07:25 no-preload-215748 kubelet[4153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:07:25 no-preload-215748 kubelet[4153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:07:25 no-preload-215748 kubelet[4153]: E0919 18:07:25.735576    4153 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Sep 19 18:07:26 no-preload-215748 kubelet[4153]: E0919 18:07:26.581520    4153 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nwxvc" podUID="af38e00c-58bc-455a-bd3e-b9e24ae26d20"
	
	* 
	* ==> storage-provisioner [9b7f19f67260bba942507b84ecaaf683e92581b279ac43e9d17b0f5b0b972be0] <==
	* I0919 17:52:41.467806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 17:52:41.478616       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 17:52:41.478791       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 17:52:41.488900       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 17:52:41.489135       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-215748_1bce18fa-9d8d-4d2b-b5db-0cb56d567d82!
	I0919 17:52:41.491736       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6e5d2114-7188-4d71-ade0-8ca69d575004", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-215748_1bce18fa-9d8d-4d2b-b5db-0cb56d567d82 became leader
	I0919 17:52:41.589744       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-215748_1bce18fa-9d8d-4d2b-b5db-0cb56d567d82!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-215748 -n no-preload-215748
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-215748 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nwxvc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-215748 describe pod metrics-server-57f55c9bc5-nwxvc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-215748 describe pod metrics-server-57f55c9bc5-nwxvc: exit status 1 (73.281411ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nwxvc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-215748 describe pod metrics-server-57f55c9bc5-nwxvc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (349.91s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (343.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0919 18:02:37.109350   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 18:02:56.282487   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 18:03:21.263710   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 18:06:14.060769   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-415155 -n embed-certs-415155
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-19 18:07:56.818244216 +0000 UTC m=+5609.750228288
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-415155 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-415155 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.435µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-415155 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-415155 -n embed-certs-415155
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-415155 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-415155 logs -n 25: (1.274768729s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:37 UTC | 19 Sep 23 17:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-415155            | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC | 19 Sep 23 17:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-142729                              | cert-expiration-142729       | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-140688 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | disable-driver-mounts-140688                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:41 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-215748             | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC | 19 Sep 23 17:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-415555  | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC | 19 Sep 23 17:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:41 UTC |                     |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-415155                 | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-415155                                  | embed-certs-415155           | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-215748                  | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 17:42 UTC | 19 Sep 23 17:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-415555       | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-415555 | jenkins | v1.31.2 | 19 Sep 23 17:43 UTC | 19 Sep 23 17:52 UTC |
	|         | default-k8s-diff-port-415555                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-100627        | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC | 19 Sep 23 17:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:46 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-100627             | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 17:49 UTC | 19 Sep 23 18:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-100627                              | old-k8s-version-100627       | jenkins | v1.31.2 | 19 Sep 23 18:06 UTC | 19 Sep 23 18:07 UTC |
	| start   | -p newest-cni-199016 --memory=2200 --alsologtostderr   | newest-cni-199016            | jenkins | v1.31.2 | 19 Sep 23 18:07 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-215748                                   | no-preload-215748            | jenkins | v1.31.2 | 19 Sep 23 18:07 UTC | 19 Sep 23 18:07 UTC |
	| start   | -p auto-648984 --memory=3072                           | auto-648984                  | jenkins | v1.31.2 | 19 Sep 23 18:07 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 18:07:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:07:36.066233   51705 out.go:296] Setting OutFile to fd 1 ...
	I0919 18:07:36.066461   51705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 18:07:36.066469   51705 out.go:309] Setting ErrFile to fd 2...
	I0919 18:07:36.066473   51705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 18:07:36.066634   51705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 18:07:36.067178   51705 out.go:303] Setting JSON to false
	I0919 18:07:36.068023   51705 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6606,"bootTime":1695140250,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:07:36.068080   51705 start.go:138] virtualization: kvm guest
	I0919 18:07:36.070371   51705 out.go:177] * [auto-648984] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:07:36.071903   51705 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 18:07:36.071873   51705 notify.go:220] Checking for updates...
	I0919 18:07:36.073519   51705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:07:36.076226   51705 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 18:07:36.078304   51705 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 18:07:36.079741   51705 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:07:36.081291   51705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:07:36.083754   51705 config.go:182] Loaded profile config "default-k8s-diff-port-415555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:07:36.083863   51705 config.go:182] Loaded profile config "embed-certs-415155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:07:36.083981   51705 config.go:182] Loaded profile config "newest-cni-199016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 18:07:36.084068   51705 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 18:07:36.120920   51705 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 18:07:36.122292   51705 start.go:298] selected driver: kvm2
	I0919 18:07:36.122306   51705 start.go:902] validating driver "kvm2" against <nil>
	I0919 18:07:36.122317   51705 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:07:36.123291   51705 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:07:36.123365   51705 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 18:07:36.137864   51705 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 18:07:36.137904   51705 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 18:07:36.138083   51705 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:07:36.138114   51705 cni.go:84] Creating CNI manager for ""
	I0919 18:07:36.138123   51705 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:07:36.138130   51705 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 18:07:36.138139   51705 start_flags.go:321] config:
	{Name:auto-648984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-648984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 18:07:36.138286   51705 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:07:36.140034   51705 out.go:177] * Starting control plane node auto-648984 in cluster auto-648984
	I0919 18:07:36.141443   51705 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 18:07:36.141477   51705 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 18:07:36.141486   51705 cache.go:57] Caching tarball of preloaded images
	I0919 18:07:36.141578   51705 preload.go:174] Found /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 18:07:36.141590   51705 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 18:07:36.141667   51705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/auto-648984/config.json ...
	I0919 18:07:36.141683   51705 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/auto-648984/config.json: {Name:mk0f0fea3bedf992aa9494768c42095805f93eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:07:36.141823   51705 start.go:365] acquiring machines lock for auto-648984: {Name:mk1dfb481040588be213f94a9badd6ebd691d966 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 18:07:36.141855   51705 start.go:369] acquired machines lock for "auto-648984" in 16.78µs
	I0919 18:07:36.141877   51705 start.go:93] Provisioning new machine with config: &{Name:auto-648984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.2 ClusterName:auto-648984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:07:36.141955   51705 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 18:07:36.143752   51705 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 18:07:36.143906   51705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 18:07:36.143952   51705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 18:07:36.157535   51705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
	I0919 18:07:36.157925   51705 main.go:141] libmachine: () Calling .GetVersion
	I0919 18:07:36.158427   51705 main.go:141] libmachine: Using API Version  1
	I0919 18:07:36.158453   51705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 18:07:36.158765   51705 main.go:141] libmachine: () Calling .GetMachineName
	I0919 18:07:36.158969   51705 main.go:141] libmachine: (auto-648984) Calling .GetMachineName
	I0919 18:07:36.159199   51705 main.go:141] libmachine: (auto-648984) Calling .DriverName
	I0919 18:07:36.159357   51705 start.go:159] libmachine.API.Create for "auto-648984" (driver="kvm2")
	I0919 18:07:36.159395   51705 client.go:168] LocalClient.Create starting
	I0919 18:07:36.159430   51705 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/ca.pem
	I0919 18:07:36.159467   51705 main.go:141] libmachine: Decoding PEM data...
	I0919 18:07:36.159488   51705 main.go:141] libmachine: Parsing certificate...
	I0919 18:07:36.159535   51705 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-6042/.minikube/certs/cert.pem
	I0919 18:07:36.159554   51705 main.go:141] libmachine: Decoding PEM data...
	I0919 18:07:36.159565   51705 main.go:141] libmachine: Parsing certificate...
	I0919 18:07:36.159580   51705 main.go:141] libmachine: Running pre-create checks...
	I0919 18:07:36.159589   51705 main.go:141] libmachine: (auto-648984) Calling .PreCreateCheck
	I0919 18:07:36.159989   51705 main.go:141] libmachine: (auto-648984) Calling .GetConfigRaw
	I0919 18:07:36.160435   51705 main.go:141] libmachine: Creating machine...
	I0919 18:07:36.160454   51705 main.go:141] libmachine: (auto-648984) Calling .Create
	I0919 18:07:36.160583   51705 main.go:141] libmachine: (auto-648984) Creating KVM machine...
	I0919 18:07:36.161965   51705 main.go:141] libmachine: (auto-648984) DBG | found existing default KVM network
	I0919 18:07:36.163554   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:36.163402   51727 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147f10}
	I0919 18:07:36.168658   51705 main.go:141] libmachine: (auto-648984) DBG | trying to create private KVM network mk-auto-648984 192.168.39.0/24...
	I0919 18:07:36.253236   51705 main.go:141] libmachine: (auto-648984) DBG | private KVM network mk-auto-648984 192.168.39.0/24 created
	I0919 18:07:36.253282   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:36.253196   51727 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 18:07:36.253299   51705 main.go:141] libmachine: (auto-648984) Setting up store path in /home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984 ...
	I0919 18:07:36.253321   51705 main.go:141] libmachine: (auto-648984) Building disk image from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 18:07:36.253350   51705 main.go:141] libmachine: (auto-648984) Downloading /home/jenkins/minikube-integration/17240-6042/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 18:07:36.473898   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:36.473784   51727 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984/id_rsa...
	I0919 18:07:36.612566   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:36.612433   51727 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984/auto-648984.rawdisk...
	I0919 18:07:36.612594   51705 main.go:141] libmachine: (auto-648984) DBG | Writing magic tar header
	I0919 18:07:36.612612   51705 main.go:141] libmachine: (auto-648984) DBG | Writing SSH key tar header
	I0919 18:07:36.612625   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:36.612595   51727 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984 ...
	I0919 18:07:36.612737   51705 main.go:141] libmachine: (auto-648984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984
	I0919 18:07:36.612779   51705 main.go:141] libmachine: (auto-648984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube/machines
	I0919 18:07:36.612804   51705 main.go:141] libmachine: (auto-648984) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984 (perms=drwx------)
	I0919 18:07:36.612820   51705 main.go:141] libmachine: (auto-648984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 18:07:36.612836   51705 main.go:141] libmachine: (auto-648984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-6042
	I0919 18:07:36.612850   51705 main.go:141] libmachine: (auto-648984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 18:07:36.612866   51705 main.go:141] libmachine: (auto-648984) DBG | Checking permissions on dir: /home/jenkins
	I0919 18:07:36.612876   51705 main.go:141] libmachine: (auto-648984) DBG | Checking permissions on dir: /home
	I0919 18:07:36.612922   51705 main.go:141] libmachine: (auto-648984) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube/machines (perms=drwxr-xr-x)
	I0919 18:07:36.612950   51705 main.go:141] libmachine: (auto-648984) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042/.minikube (perms=drwxr-xr-x)
	I0919 18:07:36.612971   51705 main.go:141] libmachine: (auto-648984) DBG | Skipping /home - not owner
	I0919 18:07:36.612984   51705 main.go:141] libmachine: (auto-648984) Setting executable bit set on /home/jenkins/minikube-integration/17240-6042 (perms=drwxrwxr-x)
	I0919 18:07:36.612996   51705 main.go:141] libmachine: (auto-648984) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 18:07:36.613003   51705 main.go:141] libmachine: (auto-648984) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 18:07:36.613013   51705 main.go:141] libmachine: (auto-648984) Creating domain...
	I0919 18:07:36.614884   51705 main.go:141] libmachine: (auto-648984) define libvirt domain using xml: 
	I0919 18:07:36.614918   51705 main.go:141] libmachine: (auto-648984) <domain type='kvm'>
	I0919 18:07:36.614929   51705 main.go:141] libmachine: (auto-648984)   <name>auto-648984</name>
	I0919 18:07:36.614946   51705 main.go:141] libmachine: (auto-648984)   <memory unit='MiB'>3072</memory>
	I0919 18:07:36.614958   51705 main.go:141] libmachine: (auto-648984)   <vcpu>2</vcpu>
	I0919 18:07:36.614966   51705 main.go:141] libmachine: (auto-648984)   <features>
	I0919 18:07:36.614979   51705 main.go:141] libmachine: (auto-648984)     <acpi/>
	I0919 18:07:36.614990   51705 main.go:141] libmachine: (auto-648984)     <apic/>
	I0919 18:07:36.615002   51705 main.go:141] libmachine: (auto-648984)     <pae/>
	I0919 18:07:36.615013   51705 main.go:141] libmachine: (auto-648984)     
	I0919 18:07:36.615024   51705 main.go:141] libmachine: (auto-648984)   </features>
	I0919 18:07:36.615032   51705 main.go:141] libmachine: (auto-648984)   <cpu mode='host-passthrough'>
	I0919 18:07:36.615064   51705 main.go:141] libmachine: (auto-648984)   
	I0919 18:07:36.615083   51705 main.go:141] libmachine: (auto-648984)   </cpu>
	I0919 18:07:36.615092   51705 main.go:141] libmachine: (auto-648984)   <os>
	I0919 18:07:36.615110   51705 main.go:141] libmachine: (auto-648984)     <type>hvm</type>
	I0919 18:07:36.615124   51705 main.go:141] libmachine: (auto-648984)     <boot dev='cdrom'/>
	I0919 18:07:36.615137   51705 main.go:141] libmachine: (auto-648984)     <boot dev='hd'/>
	I0919 18:07:36.615149   51705 main.go:141] libmachine: (auto-648984)     <bootmenu enable='no'/>
	I0919 18:07:36.615164   51705 main.go:141] libmachine: (auto-648984)   </os>
	I0919 18:07:36.615175   51705 main.go:141] libmachine: (auto-648984)   <devices>
	I0919 18:07:36.615185   51705 main.go:141] libmachine: (auto-648984)     <disk type='file' device='cdrom'>
	I0919 18:07:36.615194   51705 main.go:141] libmachine: (auto-648984)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984/boot2docker.iso'/>
	I0919 18:07:36.615202   51705 main.go:141] libmachine: (auto-648984)       <target dev='hdc' bus='scsi'/>
	I0919 18:07:36.615213   51705 main.go:141] libmachine: (auto-648984)       <readonly/>
	I0919 18:07:36.615224   51705 main.go:141] libmachine: (auto-648984)     </disk>
	I0919 18:07:36.615252   51705 main.go:141] libmachine: (auto-648984)     <disk type='file' device='disk'>
	I0919 18:07:36.615279   51705 main.go:141] libmachine: (auto-648984)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 18:07:36.615298   51705 main.go:141] libmachine: (auto-648984)       <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984/auto-648984.rawdisk'/>
	I0919 18:07:36.615314   51705 main.go:141] libmachine: (auto-648984)       <target dev='hda' bus='virtio'/>
	I0919 18:07:36.615327   51705 main.go:141] libmachine: (auto-648984)     </disk>
	I0919 18:07:36.615340   51705 main.go:141] libmachine: (auto-648984)     <interface type='network'>
	I0919 18:07:36.615353   51705 main.go:141] libmachine: (auto-648984)       <source network='mk-auto-648984'/>
	I0919 18:07:36.615370   51705 main.go:141] libmachine: (auto-648984)       <model type='virtio'/>
	I0919 18:07:36.615384   51705 main.go:141] libmachine: (auto-648984)     </interface>
	I0919 18:07:36.615396   51705 main.go:141] libmachine: (auto-648984)     <interface type='network'>
	I0919 18:07:36.615409   51705 main.go:141] libmachine: (auto-648984)       <source network='default'/>
	I0919 18:07:36.615428   51705 main.go:141] libmachine: (auto-648984)       <model type='virtio'/>
	I0919 18:07:36.615442   51705 main.go:141] libmachine: (auto-648984)     </interface>
	I0919 18:07:36.615455   51705 main.go:141] libmachine: (auto-648984)     <serial type='pty'>
	I0919 18:07:36.615468   51705 main.go:141] libmachine: (auto-648984)       <target port='0'/>
	I0919 18:07:36.615479   51705 main.go:141] libmachine: (auto-648984)     </serial>
	I0919 18:07:36.615493   51705 main.go:141] libmachine: (auto-648984)     <console type='pty'>
	I0919 18:07:36.615506   51705 main.go:141] libmachine: (auto-648984)       <target type='serial' port='0'/>
	I0919 18:07:36.615518   51705 main.go:141] libmachine: (auto-648984)     </console>
	I0919 18:07:36.615535   51705 main.go:141] libmachine: (auto-648984)     <rng model='virtio'>
	I0919 18:07:36.615551   51705 main.go:141] libmachine: (auto-648984)       <backend model='random'>/dev/random</backend>
	I0919 18:07:36.615563   51705 main.go:141] libmachine: (auto-648984)     </rng>
	I0919 18:07:36.615576   51705 main.go:141] libmachine: (auto-648984)     
	I0919 18:07:36.615586   51705 main.go:141] libmachine: (auto-648984)     
	I0919 18:07:36.615598   51705 main.go:141] libmachine: (auto-648984)   </devices>
	I0919 18:07:36.615614   51705 main.go:141] libmachine: (auto-648984) </domain>
	I0919 18:07:36.615629   51705 main.go:141] libmachine: (auto-648984) 
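	The lines above emit, one element per log line, the libvirt domain XML that the kvm2 driver defines for auto-648984. Reassembled for readability (a sketch only: element values and file paths are copied from the log, and the driver itself builds and submits this XML from its own template), the definition is roughly the Go constant below, which could be written to a file and loaded by hand with virsh define:

package main

// Sketch only: this domain definition is reassembled from the "define libvirt
// domain using xml" log lines above. minikube's kvm2 driver builds and submits
// the XML itself; printing it here is purely for inspection.
import "fmt"

const autoDomainXML = `<domain type='kvm'>
  <name>auto-648984</name>
  <memory unit='MiB'>3072</memory>
  <vcpu>2</vcpu>
  <features><acpi/><apic/><pae/></features>
  <cpu mode='host-passthrough'></cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='/home/jenkins/minikube-integration/17240-6042/.minikube/machines/auto-648984/auto-648984.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-auto-648984'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'><target port='0'/></serial>
    <console type='pty'><target type='serial' port='0'/></console>
    <rng model='virtio'><backend model='random'>/dev/random</backend></rng>
  </devices>
</domain>`

func main() {
	fmt.Println(autoDomainXML)
}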
	I0919 18:07:36.619923   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:fb:4a:7f in network default
	I0919 18:07:36.620636   51705 main.go:141] libmachine: (auto-648984) Ensuring networks are active...
	I0919 18:07:36.620665   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:36.621379   51705 main.go:141] libmachine: (auto-648984) Ensuring network default is active
	I0919 18:07:36.621654   51705 main.go:141] libmachine: (auto-648984) Ensuring network mk-auto-648984 is active
	I0919 18:07:36.622126   51705 main.go:141] libmachine: (auto-648984) Getting domain xml...
	I0919 18:07:36.622840   51705 main.go:141] libmachine: (auto-648984) Creating domain...
	I0919 18:07:37.906625   51705 main.go:141] libmachine: (auto-648984) Waiting to get IP...
	I0919 18:07:37.907604   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:37.908189   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:37.908258   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:37.908172   51727 retry.go:31] will retry after 311.2344ms: waiting for machine to come up
	I0919 18:07:38.220796   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:38.221377   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:38.221406   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:38.221332   51727 retry.go:31] will retry after 321.54496ms: waiting for machine to come up
	I0919 18:07:38.545082   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:38.545699   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:38.545729   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:38.545635   51727 retry.go:31] will retry after 316.025421ms: waiting for machine to come up
	I0919 18:07:38.863306   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:38.863904   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:38.863955   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:38.863860   51727 retry.go:31] will retry after 412.920478ms: waiting for machine to come up
	I0919 18:07:39.278701   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:39.279248   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:39.279308   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:39.279221   51727 retry.go:31] will retry after 596.900362ms: waiting for machine to come up
	I0919 18:07:39.878235   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:39.878769   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:39.878806   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:39.878725   51727 retry.go:31] will retry after 756.050886ms: waiting for machine to come up
	I0919 18:07:40.636800   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:40.637317   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:40.637349   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:40.637270   51727 retry.go:31] will retry after 807.154399ms: waiting for machine to come up
	I0919 18:07:41.446590   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:41.447003   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:41.447038   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:41.446986   51727 retry.go:31] will retry after 1.011974846s: waiting for machine to come up
	I0919 18:07:42.460343   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:42.460913   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:42.460951   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:42.460870   51727 retry.go:31] will retry after 1.626206398s: waiting for machine to come up
	I0919 18:07:44.089645   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:44.090185   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:44.090273   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:44.090228   51727 retry.go:31] will retry after 1.425826792s: waiting for machine to come up
	I0919 18:07:45.517592   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:45.518048   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:45.518075   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:45.518005   51727 retry.go:31] will retry after 2.79651477s: waiting for machine to come up
	I0919 18:07:46.290917   51238 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 18:07:46.291004   51238 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 18:07:46.291096   51238 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:07:46.291207   51238 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:07:46.291322   51238 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 18:07:46.291395   51238 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:07:46.293111   51238 out.go:204]   - Generating certificates and keys ...
	I0919 18:07:46.293207   51238 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 18:07:46.293300   51238 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 18:07:46.293392   51238 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:07:46.293456   51238 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:07:46.293532   51238 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:07:46.293590   51238 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 18:07:46.293652   51238 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 18:07:46.293791   51238 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-199016] and IPs [192.168.72.220 127.0.0.1 ::1]
	I0919 18:07:46.293858   51238 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 18:07:46.294015   51238 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-199016] and IPs [192.168.72.220 127.0.0.1 ::1]
	I0919 18:07:46.294089   51238 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:07:46.294160   51238 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:07:46.294214   51238 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 18:07:46.294280   51238 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:07:46.294339   51238 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:07:46.294402   51238 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:07:46.294475   51238 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:07:46.294543   51238 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:07:46.294637   51238 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:07:46.294727   51238 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:07:46.296261   51238 out.go:204]   - Booting up control plane ...
	I0919 18:07:46.296366   51238 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:07:46.296480   51238 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:07:46.296561   51238 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:07:46.296708   51238 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:07:46.296838   51238 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:07:46.296893   51238 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 18:07:46.297079   51238 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 18:07:46.297172   51238 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003385 seconds
	I0919 18:07:46.297301   51238 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:07:46.297470   51238 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:07:46.297555   51238 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:07:46.297781   51238 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-199016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:07:46.297847   51238 kubeadm.go:322] [bootstrap-token] Using token: x7utxn.vkgwx43gxrxtq9f4
	I0919 18:07:46.299381   51238 out.go:204]   - Configuring RBAC rules ...
	I0919 18:07:46.299506   51238 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:07:46.299605   51238 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:07:46.299781   51238 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:07:46.299946   51238 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:07:46.300070   51238 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:07:46.300177   51238 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:07:46.300356   51238 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:07:46.300427   51238 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 18:07:46.300482   51238 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 18:07:46.300492   51238 kubeadm.go:322] 
	I0919 18:07:46.300561   51238 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 18:07:46.300575   51238 kubeadm.go:322] 
	I0919 18:07:46.300668   51238 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 18:07:46.300677   51238 kubeadm.go:322] 
	I0919 18:07:46.300706   51238 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 18:07:46.300784   51238 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:07:46.300852   51238 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:07:46.300860   51238 kubeadm.go:322] 
	I0919 18:07:46.300925   51238 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 18:07:46.300933   51238 kubeadm.go:322] 
	I0919 18:07:46.301014   51238 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:07:46.301031   51238 kubeadm.go:322] 
	I0919 18:07:46.301090   51238 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 18:07:46.301203   51238 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:07:46.301283   51238 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:07:46.301304   51238 kubeadm.go:322] 
	I0919 18:07:46.301396   51238 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:07:46.301491   51238 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 18:07:46.301507   51238 kubeadm.go:322] 
	I0919 18:07:46.301620   51238 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x7utxn.vkgwx43gxrxtq9f4 \
	I0919 18:07:46.301751   51238 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 \
	I0919 18:07:46.301780   51238 kubeadm.go:322] 	--control-plane 
	I0919 18:07:46.301786   51238 kubeadm.go:322] 
	I0919 18:07:46.301886   51238 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:07:46.301895   51238 kubeadm.go:322] 
	I0919 18:07:46.301985   51238 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x7utxn.vkgwx43gxrxtq9f4 \
	I0919 18:07:46.302112   51238 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f2604ca04885b266ca3e8571482f33090dfe4d29eb194fffcb041730b5d5ec95 
	I0919 18:07:46.302128   51238 cni.go:84] Creating CNI manager for ""
	I0919 18:07:46.302136   51238 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 18:07:46.303983   51238 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 18:07:46.305467   51238 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 18:07:46.326309   51238 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
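	The 457-byte conflist itself is not reproduced in the log. The file minikube pushes to /etc/cni/net.d/1-k8s.conflist is a standard CNI "bridge" configuration; the sketch below shows what writing such a conflist looks like, with the subnet mirroring the --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 flag used for this profile. The field values are illustrative assumptions, not the literal contents of the file this test wrote.

package main

// Illustrative sketch, not the literal 457-byte file from the log: write a typical
// bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist. Bridge name, cniVersion and
// subnet are assumptions; the subnet simply mirrors the pod-network-cidr passed to
// this newest-cni profile.
import (
	"log"
	"os"
	"path/filepath"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.42.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors the "sudo mkdir -p" step above
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}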
	I0919 18:07:46.378308   51238 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:07:46.378407   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:46.378444   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=newest-cni-199016 minikube.k8s.io/updated_at=2023_09_19T18_07_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:46.428456   51238 ops.go:34] apiserver oom_adj: -16
	I0919 18:07:46.802056   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:46.898089   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:47.496369   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:47.996659   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:48.496813   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:48.996151   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:49.496541   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:49.996772   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:50.496462   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:48.317756   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:48.318293   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:48.318317   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:48.318253   51727 retry.go:31] will retry after 3.144606087s: waiting for machine to come up
	I0919 18:07:50.996349   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:51.496530   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:51.996973   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:52.496821   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:52.996807   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:53.496857   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:53.996737   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:54.496920   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:54.996355   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:55.497001   51238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:07:51.464530   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:51.465132   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:51.465183   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:51.465071   51727 retry.go:31] will retry after 3.661472224s: waiting for machine to come up
	I0919 18:07:55.129430   51705 main.go:141] libmachine: (auto-648984) DBG | domain auto-648984 has defined MAC address 52:54:00:8e:ee:34 in network mk-auto-648984
	I0919 18:07:55.129833   51705 main.go:141] libmachine: (auto-648984) DBG | unable to find current IP address of domain auto-648984 in network mk-auto-648984
	I0919 18:07:55.129865   51705 main.go:141] libmachine: (auto-648984) DBG | I0919 18:07:55.129782   51727 retry.go:31] will retry after 4.790413598s: waiting for machine to come up
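	The interleaved auto-648984 lines above are the kvm2 driver polling for the new VM's IP address and backing off between attempts ("will retry after ...: waiting for machine to come up"). A minimal sketch of that wait loop follows; lookupIP is a hypothetical placeholder for the driver's actual lookup, which resolves the domain's MAC address against the libvirt network's DHCP leases and uses randomized, growing delays.

package main

// Minimal sketch of the "waiting for machine to come up" retry loop seen in the log
// above. lookupIP is a hypothetical stand-in for the real lookup of the domain's IP
// on the mk-auto-648984 network; the point here is the poll-and-back-off shape.
import (
	"errors"
	"fmt"
	"time"
)

func lookupIP(domain string) (string, error) {
	// Hypothetical placeholder: in the real driver this consults the libvirt
	// network's DHCP leases for the domain's MAC address.
	return "", errors.New("unable to find current IP address of domain " + domain)
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the delay between attempts, roughly as in the log
		}
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	ip, err := waitForIP("auto-648984", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("machine IP:", ip)
}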
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-09-19 17:47:38 UTC, ends at Tue 2023-09-19 18:07:57 UTC. --
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.515009114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146877514985203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cab832ce-e3a5-4900-96d6-52ce0aa7b150 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.515784207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=937cdb5c-3f8a-4cd3-9168-2d1d07949351 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.515887069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=937cdb5c-3f8a-4cd3-9168-2d1d07949351 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.516590817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639,PodSandboxId:8cf13e6e546db1173a098b29385f41d69dced0ff09c55ff435cc067f34848e08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145990203371481,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e9eb53-dd92-4b84-a787-82bea5449cd2,},Annotations:map[string]string{io.kubernetes.container.hash: 23c0ca2d,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3,PodSandboxId:f17fe50a94dc6c5dac44471ef6b34fe943ebe3a6ca715ed0d622c720ab4a7805,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145989875634329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b75j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7be05aae-86ca-4640-a0f3-6518e7896711,},Annotations:map[string]string{io.kubernetes.container.hash: db3551f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac,PodSandboxId:d2c40bdab0a72b127710c5186de27fc536d17646985ab09c498c946895cb66ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145988936472478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2dbbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93175ebd-b717-4c98-a56b-aca1404ac8bd,},Annotations:map[string]string{io.kubernetes.container.hash: 64c86263,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd,PodSandboxId:81c3c3b08917e103bc2df6b1a6317fd66a289c93d08a4dc203866f8fbba50df4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145965935976289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 1670db7e0aab166d0e691d553d87d094,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6,PodSandboxId:35138525cd8bdd437c49edb71ced54a9900ef8a2fad17a50cfc47e88a467c919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145965387167371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3620b65e9d6874920789e5c75788a548,},Annotations:
map[string]string{io.kubernetes.container.hash: c42e2ce8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0,PodSandboxId:2458503066ab0f24bc9e32e728dd83b7667e465762c4b17d8dfacf8726161b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145965406467349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45da9a46e3aeecd9e79ac
27f306ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed,PodSandboxId:f665a7ec3953a0cfc044e610ae021c158ed781dae95541093bfc767156d0c930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145965325217615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9653fce8177804cb18eaf9a2711eec1
4,},Annotations:map[string]string{io.kubernetes.container.hash: d6b02d78,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=937cdb5c-3f8a-4cd3-9168-2d1d07949351 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.563163746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=23ebc776-40d5-4f1c-be21-3f4facff0163 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.563277105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=23ebc776-40d5-4f1c-be21-3f4facff0163 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.565733611Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1b732342-e261-45a8-bdae-edf3a79fe462 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.566455266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146877566434488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1b732342-e261-45a8-bdae-edf3a79fe462 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.568026886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=38525944-0018-43fb-bcfb-dff09ed38e27 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.568132080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=38525944-0018-43fb-bcfb-dff09ed38e27 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.568429623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639,PodSandboxId:8cf13e6e546db1173a098b29385f41d69dced0ff09c55ff435cc067f34848e08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145990203371481,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e9eb53-dd92-4b84-a787-82bea5449cd2,},Annotations:map[string]string{io.kubernetes.container.hash: 23c0ca2d,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3,PodSandboxId:f17fe50a94dc6c5dac44471ef6b34fe943ebe3a6ca715ed0d622c720ab4a7805,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145989875634329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b75j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7be05aae-86ca-4640-a0f3-6518e7896711,},Annotations:map[string]string{io.kubernetes.container.hash: db3551f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac,PodSandboxId:d2c40bdab0a72b127710c5186de27fc536d17646985ab09c498c946895cb66ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145988936472478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2dbbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93175ebd-b717-4c98-a56b-aca1404ac8bd,},Annotations:map[string]string{io.kubernetes.container.hash: 64c86263,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd,PodSandboxId:81c3c3b08917e103bc2df6b1a6317fd66a289c93d08a4dc203866f8fbba50df4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145965935976289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 1670db7e0aab166d0e691d553d87d094,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6,PodSandboxId:35138525cd8bdd437c49edb71ced54a9900ef8a2fad17a50cfc47e88a467c919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145965387167371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3620b65e9d6874920789e5c75788a548,},Annotations:
map[string]string{io.kubernetes.container.hash: c42e2ce8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0,PodSandboxId:2458503066ab0f24bc9e32e728dd83b7667e465762c4b17d8dfacf8726161b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145965406467349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45da9a46e3aeecd9e79ac
27f306ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed,PodSandboxId:f665a7ec3953a0cfc044e610ae021c158ed781dae95541093bfc767156d0c930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145965325217615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9653fce8177804cb18eaf9a2711eec1
4,},Annotations:map[string]string{io.kubernetes.container.hash: d6b02d78,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=38525944-0018-43fb-bcfb-dff09ed38e27 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.617081653Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8905af4c-aac3-4fc2-8b43-70accb45e05b name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.617201890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8905af4c-aac3-4fc2-8b43-70accb45e05b name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.619059577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a4c463f7-f24a-49e3-8e37-d4f4ca2a4f49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.619933096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146877619913103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a4c463f7-f24a-49e3-8e37-d4f4ca2a4f49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.621160287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=20cd40a5-0c58-4826-81dd-867538d358be name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.621396861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=20cd40a5-0c58-4826-81dd-867538d358be name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.621687232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639,PodSandboxId:8cf13e6e546db1173a098b29385f41d69dced0ff09c55ff435cc067f34848e08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145990203371481,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e9eb53-dd92-4b84-a787-82bea5449cd2,},Annotations:map[string]string{io.kubernetes.container.hash: 23c0ca2d,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3,PodSandboxId:f17fe50a94dc6c5dac44471ef6b34fe943ebe3a6ca715ed0d622c720ab4a7805,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145989875634329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b75j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7be05aae-86ca-4640-a0f3-6518e7896711,},Annotations:map[string]string{io.kubernetes.container.hash: db3551f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac,PodSandboxId:d2c40bdab0a72b127710c5186de27fc536d17646985ab09c498c946895cb66ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145988936472478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2dbbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93175ebd-b717-4c98-a56b-aca1404ac8bd,},Annotations:map[string]string{io.kubernetes.container.hash: 64c86263,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd,PodSandboxId:81c3c3b08917e103bc2df6b1a6317fd66a289c93d08a4dc203866f8fbba50df4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145965935976289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 1670db7e0aab166d0e691d553d87d094,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6,PodSandboxId:35138525cd8bdd437c49edb71ced54a9900ef8a2fad17a50cfc47e88a467c919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145965387167371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3620b65e9d6874920789e5c75788a548,},Annotations:
map[string]string{io.kubernetes.container.hash: c42e2ce8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0,PodSandboxId:2458503066ab0f24bc9e32e728dd83b7667e465762c4b17d8dfacf8726161b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145965406467349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45da9a46e3aeecd9e79ac
27f306ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed,PodSandboxId:f665a7ec3953a0cfc044e610ae021c158ed781dae95541093bfc767156d0c930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145965325217615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9653fce8177804cb18eaf9a2711eec1
4,},Annotations:map[string]string{io.kubernetes.container.hash: d6b02d78,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=20cd40a5-0c58-4826-81dd-867538d358be name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.657739964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c1bfc25e-5c25-4db0-b0ef-879da6ce8f55 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.657879039Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c1bfc25e-5c25-4db0-b0ef-879da6ce8f55 name=/runtime.v1.RuntimeService/Version
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.665585081Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c4ed9698-95f1-44c7-bf05-d8b3cc8c1310 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.666147931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1695146877666123162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c4ed9698-95f1-44c7-bf05-d8b3cc8c1310 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.666974450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8007c427-3ce1-4863-9069-5835ca023f17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.667059684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8007c427-3ce1-4863-9069-5835ca023f17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 18:07:57 embed-certs-415155 crio[726]: time="2023-09-19 18:07:57.667393904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639,PodSandboxId:8cf13e6e546db1173a098b29385f41d69dced0ff09c55ff435cc067f34848e08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1695145990203371481,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83e9eb53-dd92-4b84-a787-82bea5449cd2,},Annotations:map[string]string{io.kubernetes.container.hash: 23c0ca2d,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3,PodSandboxId:f17fe50a94dc6c5dac44471ef6b34fe943ebe3a6ca715ed0d622c720ab4a7805,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1695145989875634329,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b75j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7be05aae-86ca-4640-a0f3-6518e7896711,},Annotations:map[string]string{io.kubernetes.container.hash: db3551f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac,PodSandboxId:d2c40bdab0a72b127710c5186de27fc536d17646985ab09c498c946895cb66ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1695145988936472478,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2dbbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93175ebd-b717-4c98-a56b-aca1404ac8bd,},Annotations:map[string]string{io.kubernetes.container.hash: 64c86263,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd,PodSandboxId:81c3c3b08917e103bc2df6b1a6317fd66a289c93d08a4dc203866f8fbba50df4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1695145965935976289,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 1670db7e0aab166d0e691d553d87d094,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6,PodSandboxId:35138525cd8bdd437c49edb71ced54a9900ef8a2fad17a50cfc47e88a467c919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1695145965387167371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3620b65e9d6874920789e5c75788a548,},Annotations:
map[string]string{io.kubernetes.container.hash: c42e2ce8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0,PodSandboxId:2458503066ab0f24bc9e32e728dd83b7667e465762c4b17d8dfacf8726161b83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1695145965406467349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45da9a46e3aeecd9e79ac
27f306ed8bb,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed,PodSandboxId:f665a7ec3953a0cfc044e610ae021c158ed781dae95541093bfc767156d0c930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1695145965325217615,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-415155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9653fce8177804cb18eaf9a2711eec1
4,},Annotations:map[string]string{io.kubernetes.container.hash: d6b02d78,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8007c427-3ce1-4863-9069-5835ca023f17 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	261e128aa6ed6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   8cf13e6e546db       storage-provisioner
	2001e36377828       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   14 minutes ago      Running             kube-proxy                0                   f17fe50a94dc6       kube-proxy-b75j2
	25abbbb219d99       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   d2c40bdab0a72       coredns-5dd5756b68-2dbbk
	8a61c6dfc47ea       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   15 minutes ago      Running             kube-scheduler            2                   81c3c3b08917e       kube-scheduler-embed-certs-415155
	6bb0d00ed49b6       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   15 minutes ago      Running             kube-controller-manager   2                   2458503066ab0       kube-controller-manager-embed-certs-415155
	c5a6bd76fad6f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   35138525cd8bd       etcd-embed-certs-415155
	2af4bd79127b0       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   15 minutes ago      Running             kube-apiserver            2                   f665a7ec3953a       kube-apiserver-embed-certs-415155
	
	* 
	* ==> coredns [25abbbb219d9920f57b364cda752b7e22ae320cb37d2bb700b54f1576c16afac] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:47583 - 18749 "HINFO IN 2139454996083767068.9066230539177473847. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022823994s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-415155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-415155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=embed-certs-415155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_52_52_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:52:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-415155
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 18:07:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 18:03:24 +0000   Tue, 19 Sep 2023 17:52:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 18:03:24 +0000   Tue, 19 Sep 2023 17:52:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 18:03:24 +0000   Tue, 19 Sep 2023 17:52:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 18:03:24 +0000   Tue, 19 Sep 2023 17:53:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.6
	  Hostname:    embed-certs-415155
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 62d0f53d77e049afa3581ea6927d1068
	  System UUID:                62d0f53d-77e0-49af-a358-1ea6927d1068
	  Boot ID:                    43753850-fc8d-4bdd-a3d9-720d7f34ce86
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-2dbbk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-415155                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-415155             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-415155    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-b75j2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-415155             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-kdxsz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-415155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-415155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-415155 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-415155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-415155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-415155 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node embed-certs-415155 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m                kubelet          Node embed-certs-415155 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node embed-certs-415155 event: Registered Node embed-certs-415155 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep19 17:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073164] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.488255] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.452779] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149057] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.509844] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.298094] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.159057] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.165127] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.117036] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.273733] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[Sep19 17:48] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[ +20.060788] kauditd_printk_skb: 29 callbacks suppressed
	[Sep19 17:52] systemd-fstab-generator[3544]: Ignoring "noauto" for root device
	[  +8.762331] systemd-fstab-generator[3873]: Ignoring "noauto" for root device
	[Sep19 17:53] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.535789] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [c5a6bd76fad6f2df61917c4aa0953271d98039445970109fd53ca99aecc2ffc6] <==
	* {"level":"info","ts":"2023-09-19T17:52:47.545632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2b2e43cf24bcd38c became leader at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:47.545657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2b2e43cf24bcd38c elected leader 2b2e43cf24bcd38c at term 2"}
	{"level":"info","ts":"2023-09-19T17:52:47.552409Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:47.556567Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2b2e43cf24bcd38c","local-member-attributes":"{Name:embed-certs-415155 ClientURLs:[https://192.168.50.6:2379]}","request-path":"/0/members/2b2e43cf24bcd38c/attributes","cluster-id":"2873be035e30d2ee","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:52:47.556833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:52:47.560274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:52:47.560346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:52:47.562345Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2873be035e30d2ee","local-member-id":"2b2e43cf24bcd38c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:47.562539Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:47.562586Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:52:47.562618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:52:47.563564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.6:2379"}
	{"level":"info","ts":"2023-09-19T17:52:47.575847Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2023-09-19T17:54:35.321844Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.699941ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15243711321133089527 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:561 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-19T17:54:35.322357Z","caller":"traceutil/trace.go:171","msg":"trace[1501878281] transaction","detail":"{read_only:false; response_revision:563; number_of_response:1; }","duration":"376.017859ms","start":"2023-09-19T17:54:34.946132Z","end":"2023-09-19T17:54:35.322149Z","steps":["trace[1501878281] 'process raft request'  (duration: 119.844165ms)","trace[1501878281] 'compare'  (duration: 254.574507ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-19T17:54:35.322442Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T17:54:34.946114Z","time spent":"376.286469ms","remote":"127.0.0.1:54616","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:561 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-09-19T17:54:37.263749Z","caller":"traceutil/trace.go:171","msg":"trace[104613979] transaction","detail":"{read_only:false; response_revision:564; number_of_response:1; }","duration":"360.71259ms","start":"2023-09-19T17:54:36.903012Z","end":"2023-09-19T17:54:37.263725Z","steps":["trace[104613979] 'process raft request'  (duration: 360.496437ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T17:54:37.264086Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-19T17:54:36.902992Z","time spent":"360.942225ms","remote":"127.0.0.1:54594","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":804,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kdxsz.17865e4bfa56de80\" mod_revision:513 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kdxsz.17865e4bfa56de80\" value_size:709 lease:6020339284278313530 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-kdxsz.17865e4bfa56de80\" > >"}
	{"level":"info","ts":"2023-09-19T18:02:47.601705Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":725}
	{"level":"info","ts":"2023-09-19T18:02:47.604853Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":725,"took":"2.244132ms","hash":1676199886}
	{"level":"info","ts":"2023-09-19T18:02:47.604978Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1676199886,"revision":725,"compact-revision":-1}
	{"level":"info","ts":"2023-09-19T18:07:33.970813Z","caller":"traceutil/trace.go:171","msg":"trace[1656939100] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"141.041771ms","start":"2023-09-19T18:07:33.829724Z","end":"2023-09-19T18:07:33.970765Z","steps":["trace[1656939100] 'process raft request'  (duration: 140.886146ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-19T18:07:47.608898Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":968}
	{"level":"info","ts":"2023-09-19T18:07:47.611408Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":968,"took":"1.916179ms","hash":2721924735}
	{"level":"info","ts":"2023-09-19T18:07:47.611516Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2721924735,"revision":968,"compact-revision":725}
	
	* 
	* ==> kernel <==
	*  18:07:58 up 20 min,  0 users,  load average: 0.05, 0.11, 0.14
	Linux embed-certs-415155 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2af4bd79127b0dda45c8ac7094043864387b780b6561f952ba6c83815dc5a1ed] <==
	* E0919 18:03:50.227639       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:03:50.227676       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:04:49.107485       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:05:49.106720       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:05:50.227001       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:05:50.227075       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:05:50.227086       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 18:05:50.228139       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:05:50.228343       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:05:50.228380       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:06:49.106649       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:07:49.106962       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:07:49.233354       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:07:49.233513       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:07:49.234021       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0919 18:07:50.234672       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:07:50.234787       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 18:07:50.234817       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 18:07:50.234716       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 18:07:50.234947       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 18:07:50.235864       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [6bb0d00ed49b65f339b6bfee916bbb311142cbbf7298c6a28f16bef4c1352ac0] <==
	* I0919 18:02:05.865072       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:02:35.375432       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:02:35.873922       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:03:05.381909       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:03:05.887126       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:03:35.388657       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:03:35.897538       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:04:05.394692       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:04:05.906029       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 18:04:11.917699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="265.154µs"
	I0919 18:04:22.923854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="894.286µs"
	E0919 18:04:35.401534       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:04:35.917929       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:05:05.410069       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:05:05.927073       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:05:35.417445       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:05:35.937051       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:06:05.425128       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:06:05.946672       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:06:35.432153       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:06:35.955436       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:07:05.442149       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:07:05.965681       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0919 18:07:35.449995       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 18:07:35.975318       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [2001e3637782862f2c38fdc40bc62dab7a64a1d40bc166e2238ac029c21730f3] <==
	* I0919 17:53:10.309577       1 server_others.go:69] "Using iptables proxy"
	I0919 17:53:10.330935       1 node.go:141] Successfully retrieved node IP: 192.168.50.6
	I0919 17:53:10.427565       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 17:53:10.432374       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 17:53:10.439685       1 server_others.go:152] "Using iptables Proxier"
	I0919 17:53:10.439759       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 17:53:10.439928       1 server.go:846] "Version info" version="v1.28.2"
	I0919 17:53:10.439965       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 17:53:10.441022       1 config.go:188] "Starting service config controller"
	I0919 17:53:10.441093       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 17:53:10.441119       1 config.go:97] "Starting endpoint slice config controller"
	I0919 17:53:10.441123       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 17:53:10.446868       1 config.go:315] "Starting node config controller"
	I0919 17:53:10.446995       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 17:53:10.541497       1 shared_informer.go:318] Caches are synced for service config
	I0919 17:53:10.541615       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 17:53:10.547430       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8a61c6dfc47eac539910238d595920a9614424c4d6f869bd936ce1fe62fa45bd] <==
	* W0919 17:52:49.302528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:49.302535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:49.302577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 17:52:49.302585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 17:52:49.302626       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:52:49.302636       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 17:52:49.302688       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:49.302697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:50.128885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:50.128942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:50.160617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 17:52:50.160766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 17:52:50.172754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 17:52:50.172855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0919 17:52:50.181368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 17:52:50.181466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 17:52:50.231490       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:50.231801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:50.398956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 17:52:50.399071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 17:52:50.445292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 17:52:50.445358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0919 17:52:50.630428       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 17:52:50.630483       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0919 17:52:52.475103       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:47:38 UTC, ends at Tue 2023-09-19 18:07:58 UTC. --
	Sep 19 18:05:23 embed-certs-415155 kubelet[3880]: E0919 18:05:23.894981    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:05:36 embed-certs-415155 kubelet[3880]: E0919 18:05:36.895842    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:05:50 embed-certs-415155 kubelet[3880]: E0919 18:05:50.896055    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:05:53 embed-certs-415155 kubelet[3880]: E0919 18:05:53.029485    3880 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:05:53 embed-certs-415155 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:05:53 embed-certs-415155 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:05:53 embed-certs-415155 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:06:02 embed-certs-415155 kubelet[3880]: E0919 18:06:02.895731    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:06:16 embed-certs-415155 kubelet[3880]: E0919 18:06:16.897505    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:06:31 embed-certs-415155 kubelet[3880]: E0919 18:06:31.895687    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:06:45 embed-certs-415155 kubelet[3880]: E0919 18:06:45.894918    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:06:53 embed-certs-415155 kubelet[3880]: E0919 18:06:53.034047    3880 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:06:53 embed-certs-415155 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:06:53 embed-certs-415155 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:06:53 embed-certs-415155 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:06:56 embed-certs-415155 kubelet[3880]: E0919 18:06:56.896928    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:07:08 embed-certs-415155 kubelet[3880]: E0919 18:07:08.896346    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:07:19 embed-certs-415155 kubelet[3880]: E0919 18:07:19.896136    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:07:31 embed-certs-415155 kubelet[3880]: E0919 18:07:31.895471    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:07:44 embed-certs-415155 kubelet[3880]: E0919 18:07:44.897411    3880 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kdxsz" podUID="1588f0a7-18ae-402b-8916-e3a6423e9e15"
	Sep 19 18:07:53 embed-certs-415155 kubelet[3880]: E0919 18:07:53.028974    3880 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 18:07:53 embed-certs-415155 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 18:07:53 embed-certs-415155 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 18:07:53 embed-certs-415155 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 18:07:53 embed-certs-415155 kubelet[3880]: E0919 18:07:53.040924    3880 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	
	* 
	* ==> storage-provisioner [261e128aa6ed6a2746ae461cd20f2d01140f64257db650d74e1927755a150639] <==
	* I0919 17:53:10.375641       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 17:53:10.391596       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 17:53:10.391722       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 17:53:10.411937       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 17:53:10.412509       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"95579dec-4f0b-4dec-9dc5-d80595c653f2", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-415155_63a03324-0e0c-4655-92bd-8d111ef4375e became leader
	I0919 17:53:10.412571       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-415155_63a03324-0e0c-4655-92bd-8d111ef4375e!
	I0919 17:53:10.513650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-415155_63a03324-0e0c-4655-92bd-8d111ef4375e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-415155 -n embed-certs-415155
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-415155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kdxsz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-415155 describe pod metrics-server-57f55c9bc5-kdxsz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-415155 describe pod metrics-server-57f55c9bc5-kdxsz: exit status 1 (62.979371ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kdxsz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-415155 describe pod metrics-server-57f55c9bc5-kdxsz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (343.48s)

                                                
                                    

Test pass (224/287)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 59.9
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.28.2/json-events 20.99
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
19 TestBinaryMirror 0.53
20 TestOffline 111.49
22 TestAddons/Setup 152.29
24 TestAddons/parallel/Registry 22.36
26 TestAddons/parallel/InspektorGadget 11.27
27 TestAddons/parallel/MetricsServer 6.15
28 TestAddons/parallel/HelmTiller 25.57
30 TestAddons/parallel/CSI 63.47
31 TestAddons/parallel/Headlamp 17.02
32 TestAddons/parallel/CloudSpanner 6.33
35 TestAddons/serial/GCPAuth/Namespaces 0.12
37 TestCertOptions 66.59
38 TestCertExpiration 303.79
40 TestForceSystemdFlag 60.8
41 TestForceSystemdEnv 84.92
43 TestKVMDriverInstallOrUpdate 4.11
47 TestErrorSpam/setup 46.64
48 TestErrorSpam/start 0.32
49 TestErrorSpam/status 0.74
50 TestErrorSpam/pause 1.53
51 TestErrorSpam/unpause 1.73
52 TestErrorSpam/stop 2.19
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 88.67
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 31.15
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.05
64 TestFunctional/serial/CacheCmd/cache/add_local 2.23
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.09
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 37.73
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 1.51
75 TestFunctional/serial/LogsFileCmd 1.46
76 TestFunctional/serial/InvalidService 4.8
78 TestFunctional/parallel/ConfigCmd 0.28
79 TestFunctional/parallel/DashboardCmd 23.54
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.13
82 TestFunctional/parallel/StatusCmd 1.01
86 TestFunctional/parallel/ServiceCmdConnect 10.66
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 59.37
90 TestFunctional/parallel/SSHCmd 0.45
91 TestFunctional/parallel/CpCmd 0.9
92 TestFunctional/parallel/MySQL 40.2
93 TestFunctional/parallel/FileSync 0.19
94 TestFunctional/parallel/CertSync 1.41
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
102 TestFunctional/parallel/License 0.88
103 TestFunctional/parallel/ServiceCmd/DeployApp 13.22
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.27
105 TestFunctional/parallel/ProfileCmd/profile_list 0.28
106 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
107 TestFunctional/parallel/MountCmd/any-port 11.72
108 TestFunctional/parallel/MountCmd/specific-port 1.94
109 TestFunctional/parallel/ServiceCmd/List 0.51
110 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
111 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
112 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
113 TestFunctional/parallel/ServiceCmd/Format 0.43
114 TestFunctional/parallel/ServiceCmd/URL 0.33
115 TestFunctional/parallel/Version/short 0.04
116 TestFunctional/parallel/Version/components 1.28
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.78
122 TestFunctional/parallel/ImageCommands/Setup 2.31
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.14
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.55
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.12
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.16
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.96
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.54
142 TestFunctional/delete_addon-resizer_images 0.07
143 TestFunctional/delete_my-image_image 0.01
144 TestFunctional/delete_minikube_cached_images 0.01
148 TestIngressAddonLegacy/StartLegacyK8sCluster 120.17
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.43
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
155 TestJSONOutput/start/Command 99.52
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.67
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.64
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 7.09
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.17
183 TestMainNoArgs 0.04
184 TestMinikubeProfile 94.49
187 TestMountStart/serial/StartWithMountFirst 29.95
188 TestMountStart/serial/VerifyMountFirst 0.36
189 TestMountStart/serial/StartWithMountSecond 28.5
190 TestMountStart/serial/VerifyMountSecond 0.37
191 TestMountStart/serial/DeleteFirst 0.68
192 TestMountStart/serial/VerifyMountPostDelete 0.37
193 TestMountStart/serial/Stop 1.13
194 TestMountStart/serial/RestartStopped 27.02
195 TestMountStart/serial/VerifyMountPostStop 0.37
198 TestMultiNode/serial/FreshStart2Nodes 110.79
199 TestMultiNode/serial/DeployApp2Nodes 6.12
201 TestMultiNode/serial/AddNode 43.01
202 TestMultiNode/serial/ProfileList 0.21
203 TestMultiNode/serial/CopyFile 7.04
204 TestMultiNode/serial/StopNode 2.89
205 TestMultiNode/serial/StartAfterStop 31.68
207 TestMultiNode/serial/DeleteNode 1.71
209 TestMultiNode/serial/RestartMultiNode 447.5
210 TestMultiNode/serial/ValidateNameConflict 49.65
217 TestScheduledStopUnix 118.27
223 TestKubernetesUpgrade 180.7
225 TestStoppedBinaryUpgrade/Setup 2.03
235 TestPause/serial/Start 119.56
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
238 TestNoKubernetes/serial/StartWithK8s 49.8
239 TestNoKubernetes/serial/StartWithStopK8s 9.55
240 TestNoKubernetes/serial/Start 28.59
242 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
243 TestNoKubernetes/serial/ProfileList 0.58
244 TestNoKubernetes/serial/Stop 1.13
245 TestNoKubernetes/serial/StartNoArgs 92.16
246 TestStoppedBinaryUpgrade/MinikubeLogs 0.39
247 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
255 TestNetworkPlugins/group/false 2.71
262 TestStartStop/group/no-preload/serial/FirstStart 148.34
264 TestStartStop/group/embed-certs/serial/FirstStart 89.62
265 TestStartStop/group/embed-certs/serial/DeployApp 10.61
266 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
268 TestStartStop/group/no-preload/serial/DeployApp 11.51
270 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.24
271 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
273 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.42
274 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
277 TestStartStop/group/embed-certs/serial/SecondStart 662.41
279 TestStartStop/group/no-preload/serial/SecondStart 598.79
281 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 507.44
282 TestStartStop/group/old-k8s-version/serial/DeployApp 11.38
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.85
286 TestStartStop/group/old-k8s-version/serial/SecondStart 725.96
295 TestStartStop/group/newest-cni/serial/FirstStart 61.61
296 TestNetworkPlugins/group/auto/Start 69.87
297 TestNetworkPlugins/group/kindnet/Start 77.61
298 TestStartStop/group/newest-cni/serial/DeployApp 0
299 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.53
300 TestStartStop/group/newest-cni/serial/Stop 10.4
301 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
302 TestStartStop/group/newest-cni/serial/SecondStart 72.72
303 TestNetworkPlugins/group/auto/KubeletFlags 0.24
304 TestNetworkPlugins/group/auto/NetCatPod 12.46
305 TestNetworkPlugins/group/auto/DNS 0.22
306 TestNetworkPlugins/group/auto/Localhost 0.19
307 TestNetworkPlugins/group/auto/HairPin 0.18
308 TestNetworkPlugins/group/calico/Start 103.52
309 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
311 TestNetworkPlugins/group/kindnet/NetCatPod 13.48
312 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
315 TestStartStop/group/newest-cni/serial/Pause 3.18
316 TestNetworkPlugins/group/custom-flannel/Start 100.51
317 TestNetworkPlugins/group/kindnet/DNS 0.16
318 TestNetworkPlugins/group/kindnet/Localhost 0.19
319 TestNetworkPlugins/group/kindnet/HairPin 0.16
320 TestNetworkPlugins/group/enable-default-cni/Start 124.18
321 TestNetworkPlugins/group/flannel/Start 128.2
322 TestNetworkPlugins/group/calico/ControllerPod 5.03
323 TestNetworkPlugins/group/calico/KubeletFlags 0.2
324 TestNetworkPlugins/group/calico/NetCatPod 12.42
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.44
327 TestNetworkPlugins/group/calico/DNS 0.26
328 TestNetworkPlugins/group/calico/Localhost 0.2
329 TestNetworkPlugins/group/calico/HairPin 0.21
330 TestNetworkPlugins/group/custom-flannel/DNS 0.22
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
333 TestNetworkPlugins/group/bridge/Start 100.27
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.44
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
339 TestNetworkPlugins/group/flannel/ControllerPod 5.02
340 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
341 TestNetworkPlugins/group/flannel/NetCatPod 11.5
342 TestNetworkPlugins/group/flannel/DNS 0.19
343 TestNetworkPlugins/group/flannel/Localhost 0.15
344 TestNetworkPlugins/group/flannel/HairPin 0.14
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
346 TestNetworkPlugins/group/bridge/NetCatPod 12.38
347 TestNetworkPlugins/group/bridge/DNS 0.17
348 TestNetworkPlugins/group/bridge/Localhost 0.15
349 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.16.0/json-events (59.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-698254 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-698254 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (59.901169087s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (59.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-698254
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-698254: exit status 85 (52.704079ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-698254 | jenkins | v1.31.2 | 19 Sep 23 16:34 UTC |          |
	|         | -p download-only-698254        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 16:34:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 16:34:27.130819   13250 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:34:27.131087   13250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:34:27.131097   13250 out.go:309] Setting ErrFile to fd 2...
	I0919 16:34:27.131102   13250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:34:27.131290   13250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	W0919 16:34:27.131402   13250 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17240-6042/.minikube/config/config.json: open /home/jenkins/minikube-integration/17240-6042/.minikube/config/config.json: no such file or directory
	I0919 16:34:27.131971   13250 out.go:303] Setting JSON to true
	I0919 16:34:27.132850   13250 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1017,"bootTime":1695140250,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:34:27.132909   13250 start.go:138] virtualization: kvm guest
	I0919 16:34:27.135185   13250 out.go:97] [download-only-698254] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:34:27.136838   13250 out.go:169] MINIKUBE_LOCATION=17240
	W0919 16:34:27.135304   13250 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 16:34:27.135373   13250 notify.go:220] Checking for updates...
	I0919 16:34:27.139794   13250 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:34:27.141132   13250 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:34:27.142447   13250 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:34:27.143734   13250 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 16:34:27.146270   13250 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 16:34:27.146507   13250 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:34:27.663843   13250 out.go:97] Using the kvm2 driver based on user configuration
	I0919 16:34:27.663892   13250 start.go:298] selected driver: kvm2
	I0919 16:34:27.663901   13250 start.go:902] validating driver "kvm2" against <nil>
	I0919 16:34:27.664216   13250 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:34:27.664340   13250 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 16:34:27.677823   13250 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 16:34:27.677874   13250 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 16:34:27.678302   13250 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0919 16:34:27.678446   13250 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 16:34:27.678475   13250 cni.go:84] Creating CNI manager for ""
	I0919 16:34:27.678484   13250 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 16:34:27.678490   13250 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 16:34:27.678496   13250 start_flags.go:321] config:
	{Name:download-only-698254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-698254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:34:27.678687   13250 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:34:27.680822   13250 out.go:97] Downloading VM boot image ...
	I0919 16:34:27.680928   13250 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 16:34:36.989726   13250 out.go:97] Starting control plane node download-only-698254 in cluster download-only-698254
	I0919 16:34:36.989741   13250 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 16:34:37.102352   13250 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0919 16:34:37.102379   13250 cache.go:57] Caching tarball of preloaded images
	I0919 16:34:37.102520   13250 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 16:34:37.104884   13250 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0919 16:34:37.104897   13250 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 16:34:37.216552   13250 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0919 16:34:58.638107   13250 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 16:34:58.638185   13250 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 16:34:59.535344   13250 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0919 16:34:59.535677   13250 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/download-only-698254/config.json ...
	I0919 16:34:59.535706   13250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/download-only-698254/config.json: {Name:mk23df3ef7c240c5e586a542142ebd33931c30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:34:59.535848   13250 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0919 16:34:59.536003   13250 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-698254"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/json-events (20.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-698254 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-698254 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (20.986336842s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (20.99s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-698254
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-698254: exit status 85 (54.523536ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-698254 | jenkins | v1.31.2 | 19 Sep 23 16:34 UTC |          |
	|         | -p download-only-698254        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-698254 | jenkins | v1.31.2 | 19 Sep 23 16:35 UTC |          |
	|         | -p download-only-698254        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 16:35:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 16:35:27.087603   13434 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:35:27.087845   13434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:35:27.087854   13434 out.go:309] Setting ErrFile to fd 2...
	I0919 16:35:27.087859   13434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:35:27.088053   13434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	W0919 16:35:27.088189   13434 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17240-6042/.minikube/config/config.json: open /home/jenkins/minikube-integration/17240-6042/.minikube/config/config.json: no such file or directory
	I0919 16:35:27.088634   13434 out.go:303] Setting JSON to true
	I0919 16:35:27.089454   13434 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1077,"bootTime":1695140250,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:35:27.089504   13434 start.go:138] virtualization: kvm guest
	I0919 16:35:27.091537   13434 out.go:97] [download-only-698254] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:35:27.093169   13434 out.go:169] MINIKUBE_LOCATION=17240
	I0919 16:35:27.091689   13434 notify.go:220] Checking for updates...
	I0919 16:35:27.095826   13434 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:35:27.097319   13434 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:35:27.098718   13434 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:35:27.100092   13434 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 16:35:27.103331   13434 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 16:35:27.103747   13434 config.go:182] Loaded profile config "download-only-698254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0919 16:35:27.103796   13434 start.go:810] api.Load failed for download-only-698254: filestore "download-only-698254": Docker machine "download-only-698254" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0919 16:35:27.103894   13434 driver.go:373] Setting default libvirt URI to qemu:///system
	W0919 16:35:27.103958   13434 start.go:810] api.Load failed for download-only-698254: filestore "download-only-698254": Docker machine "download-only-698254" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0919 16:35:27.134990   13434 out.go:97] Using the kvm2 driver based on existing profile
	I0919 16:35:27.135025   13434 start.go:298] selected driver: kvm2
	I0919 16:35:27.135033   13434 start.go:902] validating driver "kvm2" against &{Name:download-only-698254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-698254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:35:27.135461   13434 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:35:27.135540   13434 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-6042/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 16:35:27.149741   13434 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 16:35:27.150434   13434 cni.go:84] Creating CNI manager for ""
	I0919 16:35:27.150458   13434 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 16:35:27.150472   13434 start_flags.go:321] config:
	{Name:download-only-698254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-698254 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:35:27.150648   13434 iso.go:125] acquiring lock: {Name:mk514aa5854fd317cc6dd61e19c3171ab4ae49dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:35:27.152382   13434 out.go:97] Starting control plane node download-only-698254 in cluster download-only-698254
	I0919 16:35:27.152398   13434 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 16:35:27.276081   13434 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 16:35:27.276123   13434 cache.go:57] Caching tarball of preloaded images
	I0919 16:35:27.276270   13434 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 16:35:27.278336   13434 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I0919 16:35:27.278351   13434 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I0919 16:35:27.395068   13434 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:63ef340a9dae90462e676325aa502af3 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0919 16:35:46.108544   13434 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I0919 16:35:46.108630   13434 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17240-6042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I0919 16:35:47.036863   13434 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0919 16:35:47.036983   13434 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/download-only-698254/config.json ...
	I0919 16:35:47.037184   13434 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0919 16:35:47.037345   13434 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17240-6042/.minikube/cache/linux/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-698254"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-698254
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestBinaryMirror (0.53s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-912336 --alsologtostderr --binary-mirror http://127.0.0.1:32843 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-912336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-912336
--- PASS: TestBinaryMirror (0.53s)

TestOffline (111.49s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-259423 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-259423 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m50.282674387s)
helpers_test.go:175: Cleaning up "offline-crio-259423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-259423
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-259423: (1.211740196s)
--- PASS: TestOffline (111.49s)

TestAddons/Setup (152.29s)
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-897988 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-897988 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.287564015s)
--- PASS: TestAddons/Setup (152.29s)

TestAddons/parallel/Registry (22.36s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 25.869214ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-r8jt6" [f55a59ff-10e0-4243-a3ae-4c53d0872417] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019703902s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kw8ch" [7941393c-e3f2-4002-b948-b9ce20653d5a] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018802632s
addons_test.go:316: (dbg) Run:  kubectl --context addons-897988 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-897988 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-897988 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.334599277s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 ip
2023/09/19 16:38:42 [DEBUG] GET http://192.168.39.206:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.36s)

TestAddons/parallel/InspektorGadget (11.27s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ptfn9" [a863c599-607f-44cc-ba0d-0fac951a4713] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.042681969s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-897988
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-897988: (6.221254758s)
--- PASS: TestAddons/parallel/InspektorGadget (11.27s)

TestAddons/parallel/MetricsServer (6.15s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 25.615006ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-l5q8s" [543e5e73-a2c2-45fa-a365-1c30b46a6ed9] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.022581778s
addons_test.go:391: (dbg) Run:  kubectl --context addons-897988 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-897988 addons disable metrics-server --alsologtostderr -v=1: (1.007956071s)
--- PASS: TestAddons/parallel/MetricsServer (6.15s)

TestAddons/parallel/HelmTiller (25.57s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 26.123029ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-v8tgw" [a7a928d8-d956-4a81-a86d-bc13b9070b40] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.018696883s
addons_test.go:449: (dbg) Run:  kubectl --context addons-897988 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-897988 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.023870457s)
addons_test.go:454: kubectl --context addons-897988 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:449: (dbg) Run:  kubectl --context addons-897988 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-897988 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.733979546s)
addons_test.go:454: kubectl --context addons-897988 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:449: (dbg) Run:  kubectl --context addons-897988 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-897988 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.060277084s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (25.57s)

TestAddons/parallel/CSI (63.47s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.683292ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-897988 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-897988 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [75d35f40-176e-43ba-a4e4-47f4bbe35953] Pending
helpers_test.go:344: "task-pv-pod" [75d35f40-176e-43ba-a4e4-47f4bbe35953] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [75d35f40-176e-43ba-a4e4-47f4bbe35953] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.026736236s
addons_test.go:560: (dbg) Run:  kubectl --context addons-897988 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-897988 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-897988 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-897988 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-897988 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-897988 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-897988 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-897988 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-897988 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c6603ca9-308e-4bfe-827c-79281fff70ca] Pending
helpers_test.go:344: "task-pv-pod-restore" [c6603ca9-308e-4bfe-827c-79281fff70ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c6603ca9-308e-4bfe-827c-79281fff70ca] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.018961647s
addons_test.go:602: (dbg) Run:  kubectl --context addons-897988 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-897988 delete pod task-pv-pod-restore: (1.378215619s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-897988 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-897988 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-897988 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.803541544s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-897988 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.47s)

TestAddons/parallel/Headlamp (17.02s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-897988 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-897988 --alsologtostderr -v=1: (2.003206429s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-zm88j" [960afd60-f5d7-4309-8163-df6fb3d4fb88] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-zm88j" [960afd60-f5d7-4309-8163-df6fb3d4fb88] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-zm88j" [960afd60-f5d7-4309-8163-df6fb3d4fb88] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.015266602s
--- PASS: TestAddons/parallel/Headlamp (17.02s)

TestAddons/parallel/CloudSpanner (6.33s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-hqfz7" [64e1732c-7a0a-4260-8849-6c41fad6e0e4] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.483929218s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-897988
--- PASS: TestAddons/parallel/CloudSpanner (6.33s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-897988 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-897988 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestCertOptions (66.59s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-512928 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-512928 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m5.183405197s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-512928 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-512928 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-512928 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-512928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-512928
--- PASS: TestCertOptions (66.59s)

TestCertExpiration (303.79s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-142729 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-142729 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m4.449264643s)
E0919 17:36:14.060576   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-142729 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-142729 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (58.302902967s)
helpers_test.go:175: Cleaning up "cert-expiration-142729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-142729
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-142729: (1.037954141s)
--- PASS: TestCertExpiration (303.79s)

TestForceSystemdFlag (60.8s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-212057 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-212057 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.628334738s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-212057 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-212057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-212057
--- PASS: TestForceSystemdFlag (60.80s)

TestForceSystemdEnv (84.92s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-367630 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-367630 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m23.902211598s)
helpers_test.go:175: Cleaning up "force-systemd-env-367630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-367630
E0919 17:37:56.281490   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-367630: (1.015575681s)
--- PASS: TestForceSystemdEnv (84.92s)

TestKVMDriverInstallOrUpdate (4.11s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.11s)

TestErrorSpam/setup (46.64s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-052562 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-052562 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-052562 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-052562 --driver=kvm2  --container-runtime=crio: (46.643220955s)
--- PASS: TestErrorSpam/setup (46.64s)

TestErrorSpam/start (0.32s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.74s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.53s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.73s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (2.19s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 stop: (2.069650159s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-052562 --log_dir /tmp/nospam-052562 stop
--- PASS: TestErrorSpam/stop (2.19s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17240-6042/.minikube/files/etc/test/nested/copy/13239/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.67s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225429 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-225429 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m28.668509495s)
--- PASS: TestFunctional/serial/StartWithProxy (88.67s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (31.15s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225429 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-225429 --alsologtostderr -v=8: (31.152958121s)
functional_test.go:659: soft start took 31.153631251s for "functional-225429" cluster.
--- PASS: TestFunctional/serial/SoftStart (31.15s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-225429 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 cache add registry.k8s.io/pause:3.1: (1.00024046s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 cache add registry.k8s.io/pause:3.3: (1.014832642s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 cache add registry.k8s.io/pause:latest: (1.039021135s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

TestFunctional/serial/CacheCmd/cache/add_local (2.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-225429 /tmp/TestFunctionalserialCacheCmdcacheadd_local4228674061/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 cache add minikube-local-cache-test:functional-225429
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 cache add minikube-local-cache-test:functional-225429: (1.940746217s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 cache delete minikube-local-cache-test:functional-225429
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-225429
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.60775ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 kubectl -- --context functional-225429 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.09s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-225429 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.73s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225429 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-225429 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.731768131s)
functional_test.go:757: restart took 37.731890931s for "functional-225429" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.73s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-225429 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.51s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 logs: (1.506504588s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.46s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 logs --file /tmp/TestFunctionalserialLogsFileCmd3721524137/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 logs --file /tmp/TestFunctionalserialLogsFileCmd3721524137/001/logs.txt: (1.463370888s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (4.8s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-225429 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-225429
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-225429: exit status 115 (286.925835ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.71:32434 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-225429 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-225429 delete -f testdata/invalidsvc.yaml: (1.208519131s)
--- PASS: TestFunctional/serial/InvalidService (4.80s)

TestFunctional/parallel/ConfigCmd (0.28s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 config get cpus: exit status 14 (47.252085ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 config get cpus: exit status 14 (38.090991ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)

TestFunctional/parallel/DashboardCmd (23.54s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-225429 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-225429 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 19461: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.54s)

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225429 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-225429 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (135.911662ms)

-- stdout --
	* [functional-225429] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0919 16:47:58.565523   19326 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:47:58.565814   19326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:47:58.565826   19326 out.go:309] Setting ErrFile to fd 2...
	I0919 16:47:58.565833   19326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:47:58.566105   19326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 16:47:58.566668   19326 out.go:303] Setting JSON to false
	I0919 16:47:58.569772   19326 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1829,"bootTime":1695140250,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:47:58.569915   19326 start.go:138] virtualization: kvm guest
	I0919 16:47:58.572227   19326 out.go:177] * [functional-225429] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:47:58.574042   19326 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 16:47:58.573994   19326 notify.go:220] Checking for updates...
	I0919 16:47:58.576253   19326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:47:58.578456   19326 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:47:58.579864   19326 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:47:58.581192   19326 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 16:47:58.582545   19326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 16:47:58.584497   19326 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 16:47:58.585238   19326 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:47:58.585298   19326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:47:58.601453   19326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46079
	I0919 16:47:58.601908   19326 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:47:58.602643   19326 main.go:141] libmachine: Using API Version  1
	I0919 16:47:58.602661   19326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:47:58.603138   19326 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:47:58.603337   19326 main.go:141] libmachine: (functional-225429) Calling .DriverName
	I0919 16:47:58.603676   19326 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:47:58.604108   19326 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:47:58.604250   19326 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:47:58.619295   19326 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0919 16:47:58.619767   19326 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:47:58.620338   19326 main.go:141] libmachine: Using API Version  1
	I0919 16:47:58.620367   19326 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:47:58.620664   19326 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:47:58.620857   19326 main.go:141] libmachine: (functional-225429) Calling .DriverName
	I0919 16:47:58.653531   19326 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 16:47:58.654930   19326 start.go:298] selected driver: kvm2
	I0919 16:47:58.654949   19326 start.go:902] validating driver "kvm2" against &{Name:functional-225429 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-225429 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.71 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:47:58.655123   19326 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 16:47:58.657295   19326 out.go:177] 
	W0919 16:47:58.658613   19326 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 16:47:58.659910   19326 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225429 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
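For reference, the --dry-run path only validates the requested configuration against the existing profile and never touches the VM; asking for 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY guard because the message above puts the usable minimum at 1800MB. A minimal sketch of exercising that validation by hand (the 2048mb value is illustrative and not part of this run):

    # rejected before any VM work: below the 1800MB usable minimum
    out/minikube-linux-amd64 start -p functional-225429 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # should pass validation and exit without starting anything
    out/minikube-linux-amd64 start -p functional-225429 --dry-run --memory 2048mb --driver=kvm2 --container-runtime=crio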

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-225429 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-225429 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (129.97932ms)

                                                
                                                
-- stdout --
	* [functional-225429] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 16:47:58.432510   19289 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:47:58.432633   19289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:47:58.432642   19289 out.go:309] Setting ErrFile to fd 2...
	I0919 16:47:58.432647   19289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:47:58.432919   19289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 16:47:58.433440   19289 out.go:303] Setting JSON to false
	I0919 16:47:58.434391   19289 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1828,"bootTime":1695140250,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:47:58.434473   19289 start.go:138] virtualization: kvm guest
	I0919 16:47:58.436844   19289 out.go:177] * [functional-225429] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0919 16:47:58.438915   19289 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 16:47:58.438934   19289 notify.go:220] Checking for updates...
	I0919 16:47:58.440345   19289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:47:58.441855   19289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 16:47:58.443349   19289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 16:47:58.445493   19289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 16:47:58.446909   19289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 16:47:58.448622   19289 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 16:47:58.449003   19289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:47:58.449045   19289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:47:58.463291   19289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I0919 16:47:58.463697   19289 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:47:58.464217   19289 main.go:141] libmachine: Using API Version  1
	I0919 16:47:58.464242   19289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:47:58.464700   19289 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:47:58.464932   19289 main.go:141] libmachine: (functional-225429) Calling .DriverName
	I0919 16:47:58.465199   19289 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:47:58.465614   19289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 16:47:58.465662   19289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:47:58.480001   19289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0919 16:47:58.480376   19289 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:47:58.480857   19289 main.go:141] libmachine: Using API Version  1
	I0919 16:47:58.480883   19289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:47:58.481194   19289 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:47:58.481377   19289 main.go:141] libmachine: (functional-225429) Calling .DriverName
	I0919 16:47:58.515624   19289 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0919 16:47:58.517250   19289 start.go:298] selected driver: kvm2
	I0919 16:47:58.517269   19289 start.go:902] validating driver "kvm2" against &{Name:functional-225429 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-225429 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.71 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:47:58.517438   19289 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 16:47:58.520277   19289 out.go:177] 
	W0919 16:47:58.521733   19289 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 16:47:58.523521   19289 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
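The French output above is the same 250MB dry run executed under a French locale; minikube localizes its user-facing messages from the process locale. A minimal sketch, assuming the locale is selected via the usual environment variables (the exact variable the test harness sets is not shown in this log):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-225429 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio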

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-225429 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-225429 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-4dq2m" [614f25ec-6a21-44d1-8671-2274bae00a14] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0919 16:48:22.542679   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-55497b8b78-4dq2m" [614f25ec-6a21-44d1-8671-2274bae00a14] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.014825695s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.71:30100
functional_test.go:1674: http://192.168.50.71:30100: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-4dq2m

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.71:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.71:30100
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.66s)
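The flow above is the standard NodePort round trip: create a deployment, expose it, ask minikube for the node URL, then hit it. A minimal sketch of the same steps outside the test harness (the wait and curl lines are assumptions about how one would verify the endpoint manually):

    kubectl --context functional-225429 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-225429 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-225429 wait --for=condition=Available deployment/hello-node-connect --timeout=10m
    url=$(out/minikube-linux-amd64 -p functional-225429 service hello-node-connect --url)
    curl "$url"   # echoserver echoes back hostname, request headers, etc.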

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (59.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0e61168c-88d6-482b-82e6-22846f5a2e41] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.032297988s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-225429 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-225429 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-225429 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-225429 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-225429 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3954e7a6-a357-42c5-ba82-dffa4fc98712] Pending
helpers_test.go:344: "sp-pod" [3954e7a6-a357-42c5-ba82-dffa4fc98712] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3954e7a6-a357-42c5-ba82-dffa4fc98712] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.024899247s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-225429 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-225429 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-225429 delete -f testdata/storage-provisioner/pod.yaml: (4.105551456s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-225429 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [66eabf17-42b9-4a3a-8b30-bc8f4c1d7a25] Pending
helpers_test.go:344: "sp-pod" [66eabf17-42b9-4a3a-8b30-bc8f4c1d7a25] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0919 16:48:41.745615   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [66eabf17-42b9-4a3a-8b30-bc8f4c1d7a25] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.026993601s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-225429 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (59.37s)
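The persistence check here is: write a marker file through the pod, delete the pod, recreate it from the same manifest, and confirm the file is still on the claim. A minimal sketch using the same testdata manifests and pod name (the wait steps are assumptions; the test polls readiness itself):

    kubectl --context functional-225429 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-225429 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-225429 wait --for=condition=Ready pod/sp-pod --timeout=3m
    kubectl --context functional-225429 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-225429 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-225429 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-225429 wait --for=condition=Ready pod/sp-pod --timeout=3m
    kubectl --context functional-225429 exec sp-pod -- ls /tmp/mount   # foo should survive the pod recreation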

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh -n functional-225429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 cp functional-225429:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2185719640/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh -n functional-225429 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (40.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-225429 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-6pnqx" [3ef1795f-68f3-4a69-97a1-cc8118d7b578] Pending
helpers_test.go:344: "mysql-859648c796-6pnqx" [3ef1795f-68f3-4a69-97a1-cc8118d7b578] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-6pnqx" [3ef1795f-68f3-4a69-97a1-cc8118d7b578] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 37.020215443s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-225429 exec mysql-859648c796-6pnqx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-225429 exec mysql-859648c796-6pnqx -- mysql -ppassword -e "show databases;": exit status 1 (200.370602ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-225429 exec mysql-859648c796-6pnqx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-225429 exec mysql-859648c796-6pnqx -- mysql -ppassword -e "show databases;": exit status 1 (147.268957ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-225429 exec mysql-859648c796-6pnqx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (40.20s)
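The two Non-zero exits above are expected warm-up noise: right after the pod reports Running, mysqld is typically still finishing its first-boot initialization, so early attempts fail with authentication or socket errors before the final query succeeds. A minimal retry sketch of the same probe (loop bounds and sleep are illustrative):

    for i in $(seq 1 12); do
      kubectl --context functional-225429 exec mysql-859648c796-6pnqx -- \
        mysql -ppassword -e "show databases;" && break
      sleep 5
    done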

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13239/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo cat /etc/test/nested/copy/13239/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13239.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo cat /etc/ssl/certs/13239.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13239.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo cat /usr/share/ca-certificates/13239.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/132392.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo cat /etc/ssl/certs/132392.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/132392.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo cat /usr/share/ca-certificates/132392.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
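The paths being checked presumably correspond to certificates the test harness dropped into its MINIKUBE_HOME before the cluster started; minikube syncs certs placed under $MINIKUBE_HOME/certs into the node's CA store, which is why both the named .pem files and their OpenSSL hash links (51391683.0, 3ec20f2e.0) are expected in /etc/ssl/certs. A minimal sketch of checking a cert of your own this way, assuming that documented sync-on-start behavior (my-ca.pem is hypothetical):

    cp my-ca.pem "$MINIKUBE_HOME/certs/"
    out/minikube-linux-amd64 start -p functional-225429
    out/minikube-linux-amd64 -p functional-225429 ssh "sudo ls /etc/ssl/certs | grep -i my-ca"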

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-225429 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 ssh "sudo systemctl is-active docker": exit status 1 (255.508673ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 ssh "sudo systemctl is-active containerd": exit status 1 (227.157298ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
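With crio selected as the container runtime, docker and containerd are expected to be present but inactive; `systemctl is-active` exits with status 3 for an inactive unit, which is what surfaces above as "Process exited with status 3" while stdout still prints "inactive", so the non-zero exit is the pass condition rather than a failure. A quick manual check looks the same (the crio line is an assumption about the active unit's name):

    out/minikube-linux-amd64 -p functional-225429 ssh "sudo systemctl is-active crio"        # expect "active", remote exit 0
    out/minikube-linux-amd64 -p functional-225429 ssh "sudo systemctl is-active docker"      # expect "inactive", remote exit 3
    out/minikube-linux-amd64 -p functional-225429 ssh "sudo systemctl is-active containerd"  # expect "inactive", remote exit 3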

                                                
                                    
x
+
TestFunctional/parallel/License (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (13.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-225429 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-225429 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-58wqh" [85faa945-75aa-4573-b9a1-c913cbaf431a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-58wqh" [85faa945-75aa-4573-b9a1-c913cbaf431a] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.028289984s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "241.289588ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "37.263513ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "235.865401ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "40.953374ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (11.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdany-port567898981/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695142077184790940" to /tmp/TestFunctionalparallelMountCmdany-port567898981/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695142077184790940" to /tmp/TestFunctionalparallelMountCmdany-port567898981/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695142077184790940" to /tmp/TestFunctionalparallelMountCmdany-port567898981/001/test-1695142077184790940
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (228.963393ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 16:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 16:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 16:47 test-1695142077184790940
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh cat /mount-9p/test-1695142077184790940
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-225429 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7af2996c-fd50-4606-9fec-aa6c2d44e283] Pending
helpers_test.go:344: "busybox-mount" [7af2996c-fd50-4606-9fec-aa6c2d44e283] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7af2996c-fd50-4606-9fec-aa6c2d44e283] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7af2996c-fd50-4606-9fec-aa6c2d44e283] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.029339975s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-225429 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdany-port567898981/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.72s)
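The mount test keeps `minikube mount` running as a background daemon, confirms the 9p mount from inside the guest (the first findmnt above races the mount coming up, hence the retried call), exercises it from a pod, then tears it down. A minimal manual version of the verify/cleanup steps (the host directory is illustrative):

    out/minikube-linux-amd64 mount -p functional-225429 /tmp/hostdir:/mount-9p &
    out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-225429 ssh "ls -la /mount-9p"
    out/minikube-linux-amd64 mount -p functional-225429 --kill=true   # kills the profile's mount processes, as in VerifyCleanup below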

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdspecific-port2331229038/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.879517ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdspecific-port2331229038/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 ssh "sudo umount -f /mount-9p": exit status 1 (202.900885ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-225429 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdspecific-port2331229038/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 service list -o json
functional_test.go:1493: Took "496.327425ms" to run "out/minikube-linux-amd64 -p functional-225429 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.71:31820
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4143496519/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4143496519/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4143496519/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T" /mount1: exit status 1 (303.981278ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-225429 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4143496519/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4143496519/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-225429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4143496519/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.71:31820
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 version -o=json --components: (1.277860757s)
--- PASS: TestFunctional/parallel/Version/components (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-225429 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-225429
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-225429
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-225429 image ls --format short --alsologtostderr:
I0919 16:48:46.589474   21158 out.go:296] Setting OutFile to fd 1 ...
I0919 16:48:46.589575   21158 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:46.589587   21158 out.go:309] Setting ErrFile to fd 2...
I0919 16:48:46.589594   21158 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:46.589799   21158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
I0919 16:48:46.590354   21158 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:46.590459   21158 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:46.590820   21158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:46.590883   21158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:46.605382   21158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33247
I0919 16:48:46.605810   21158 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:46.606398   21158 main.go:141] libmachine: Using API Version  1
I0919 16:48:46.606435   21158 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:46.606766   21158 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:46.606937   21158 main.go:141] libmachine: (functional-225429) Calling .GetState
I0919 16:48:46.608864   21158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:46.608901   21158 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:46.625648   21158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
I0919 16:48:46.626005   21158 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:46.626457   21158 main.go:141] libmachine: Using API Version  1
I0919 16:48:46.626488   21158 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:46.626992   21158 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:46.627195   21158 main.go:141] libmachine: (functional-225429) Calling .DriverName
I0919 16:48:46.627381   21158 ssh_runner.go:195] Run: systemctl --version
I0919 16:48:46.627411   21158 main.go:141] libmachine: (functional-225429) Calling .GetSSHHostname
I0919 16:48:46.630297   21158 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:46.630755   21158 main.go:141] libmachine: (functional-225429) Calling .GetSSHPort
I0919 16:48:46.630824   21158 main.go:141] libmachine: (functional-225429) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:de:81", ip: ""} in network mk-functional-225429: {Iface:virbr1 ExpiryTime:2023-09-19 17:45:19 +0000 UTC Type:0 Mac:52:54:00:fa:de:81 Iaid: IPaddr:192.168.50.71 Prefix:24 Hostname:functional-225429 Clientid:01:52:54:00:fa:de:81}
I0919 16:48:46.630865   21158 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined IP address 192.168.50.71 and MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:46.630960   21158 main.go:141] libmachine: (functional-225429) Calling .GetSSHKeyPath
I0919 16:48:46.631122   21158 main.go:141] libmachine: (functional-225429) Calling .GetSSHUsername
I0919 16:48:46.631265   21158 sshutil.go:53] new ssh client: &{IP:192.168.50.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/functional-225429/id_rsa Username:docker}
I0919 16:48:46.727151   21158 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 16:48:46.811005   21158 main.go:141] libmachine: Making call to close driver server
I0919 16:48:46.811017   21158 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:46.811343   21158 main.go:141] libmachine: (functional-225429) DBG | Closing plugin on server side
I0919 16:48:46.811367   21158 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:46.811384   21158 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:48:46.811402   21158 main.go:141] libmachine: Making call to close driver server
I0919 16:48:46.811415   21158 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:46.811643   21158 main.go:141] libmachine: (functional-225429) DBG | Closing plugin on server side
I0919 16:48:46.811678   21158 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:46.811695   21158 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-225429 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-225429  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-225429  | 53a6e28df2445 | 3.34kB |
| registry.k8s.io/kube-apiserver          | v1.28.2            | cdcab12b2dd16 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-proxy              | v1.28.2            | c120fed2beb84 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 7a5d9d67a13f6 | 61.5MB |
| docker.io/library/nginx                 | latest             | f5a6b296b8a29 | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 55f13c92defb1 | 123MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-225429 image ls --format table --alsologtostderr:
I0919 16:48:47.173998   21272 out.go:296] Setting OutFile to fd 1 ...
I0919 16:48:47.174331   21272 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:47.174346   21272 out.go:309] Setting ErrFile to fd 2...
I0919 16:48:47.174354   21272 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:47.174669   21272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
I0919 16:48:47.175438   21272 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:47.175592   21272 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:47.176136   21272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:47.176198   21272 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:47.191170   21272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
I0919 16:48:47.191611   21272 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:47.192203   21272 main.go:141] libmachine: Using API Version  1
I0919 16:48:47.192230   21272 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:47.192579   21272 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:47.192804   21272 main.go:141] libmachine: (functional-225429) Calling .GetState
I0919 16:48:47.194592   21272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:47.194627   21272 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:47.210514   21272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
I0919 16:48:47.210921   21272 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:47.211405   21272 main.go:141] libmachine: Using API Version  1
I0919 16:48:47.211428   21272 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:47.211779   21272 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:47.211951   21272 main.go:141] libmachine: (functional-225429) Calling .DriverName
I0919 16:48:47.212138   21272 ssh_runner.go:195] Run: systemctl --version
I0919 16:48:47.212164   21272 main.go:141] libmachine: (functional-225429) Calling .GetSSHHostname
I0919 16:48:47.214592   21272 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:47.214911   21272 main.go:141] libmachine: (functional-225429) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:de:81", ip: ""} in network mk-functional-225429: {Iface:virbr1 ExpiryTime:2023-09-19 17:45:19 +0000 UTC Type:0 Mac:52:54:00:fa:de:81 Iaid: IPaddr:192.168.50.71 Prefix:24 Hostname:functional-225429 Clientid:01:52:54:00:fa:de:81}
I0919 16:48:47.214956   21272 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined IP address 192.168.50.71 and MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:47.215010   21272 main.go:141] libmachine: (functional-225429) Calling .GetSSHPort
I0919 16:48:47.215190   21272 main.go:141] libmachine: (functional-225429) Calling .GetSSHKeyPath
I0919 16:48:47.215309   21272 main.go:141] libmachine: (functional-225429) Calling .GetSSHUsername
I0919 16:48:47.215444   21272 sshutil.go:53] new ssh client: &{IP:192.168.50.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/functional-225429/id_rsa Username:docker}
I0919 16:48:47.304167   21272 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 16:48:47.391739   21272 main.go:141] libmachine: Making call to close driver server
I0919 16:48:47.391766   21272 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:47.392030   21272 main.go:141] libmachine: (functional-225429) DBG | Closing plugin on server side
I0919 16:48:47.392065   21272 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:47.392087   21272 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:48:47.392111   21272 main.go:141] libmachine: Making call to close driver server
I0919 16:48:47.392124   21272 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:47.392339   21272 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:47.392356   21272 main.go:141] libmachine: (functional-225429) DBG | Closing plugin on server side
I0919 16:48:47.392361   21272 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-225429 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c
441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":["docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153","docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820093"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["r
egistry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142
372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-225429"],"size":"34114467"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4","registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051
f6436f39d22a1def682e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"123171638"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"53a6e28df24457a920d421a117d5a66ee8a161ab4a859f08330412461d9704ff","repoDigests":["localhost/minikube-local-cache-test@sha256:e3eb63d388de3c87a118c0b6e3693187905202f0d0ce75ab876d3cd69f106be2"],"repoTags":["localhost/minikube-local-cache-test:functional-225429"],"size":"3343"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"127149008"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":["registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded","registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"74687895"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube
-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"61485878"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-225429 image ls --format json --alsologtostderr:
I0919 16:48:46.895448   21214 out.go:296] Setting OutFile to fd 1 ...
I0919 16:48:46.895598   21214 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:46.895611   21214 out.go:309] Setting ErrFile to fd 2...
I0919 16:48:46.895619   21214 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:46.895885   21214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
I0919 16:48:46.896624   21214 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:46.896736   21214 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:46.897255   21214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:46.897312   21214 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:46.911365   21214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
I0919 16:48:46.911845   21214 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:46.912402   21214 main.go:141] libmachine: Using API Version  1
I0919 16:48:46.912458   21214 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:46.912871   21214 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:46.913033   21214 main.go:141] libmachine: (functional-225429) Calling .GetState
I0919 16:48:46.915316   21214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:46.915355   21214 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:46.933514   21214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46875
I0919 16:48:46.933939   21214 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:46.934407   21214 main.go:141] libmachine: Using API Version  1
I0919 16:48:46.934439   21214 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:46.934815   21214 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:46.935005   21214 main.go:141] libmachine: (functional-225429) Calling .DriverName
I0919 16:48:46.935199   21214 ssh_runner.go:195] Run: systemctl --version
I0919 16:48:46.935225   21214 main.go:141] libmachine: (functional-225429) Calling .GetSSHHostname
I0919 16:48:46.938409   21214 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:46.938897   21214 main.go:141] libmachine: (functional-225429) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:de:81", ip: ""} in network mk-functional-225429: {Iface:virbr1 ExpiryTime:2023-09-19 17:45:19 +0000 UTC Type:0 Mac:52:54:00:fa:de:81 Iaid: IPaddr:192.168.50.71 Prefix:24 Hostname:functional-225429 Clientid:01:52:54:00:fa:de:81}
I0919 16:48:46.938914   21214 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined IP address 192.168.50.71 and MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:46.939127   21214 main.go:141] libmachine: (functional-225429) Calling .GetSSHPort
I0919 16:48:46.939302   21214 main.go:141] libmachine: (functional-225429) Calling .GetSSHKeyPath
I0919 16:48:46.939454   21214 main.go:141] libmachine: (functional-225429) Calling .GetSSHUsername
I0919 16:48:46.939570   21214 sshutil.go:53] new ssh client: &{IP:192.168.50.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/functional-225429/id_rsa Username:docker}
I0919 16:48:47.036898   21214 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 16:48:47.121213   21214 main.go:141] libmachine: Making call to close driver server
I0919 16:48:47.121229   21214 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:47.121472   21214 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:47.121489   21214 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:48:47.121505   21214 main.go:141] libmachine: Making call to close driver server
I0919 16:48:47.121517   21214 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:47.121786   21214 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:47.121798   21214 main.go:141] libmachine: (functional-225429) DBG | Closing plugin on server side
I0919 16:48:47.121824   21214 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-225429 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
- registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "123171638"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "61485878"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests:
- docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153
- docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a
repoTags:
- docker.io/library/nginx:latest
size: "190820093"
- id: 53a6e28df24457a920d421a117d5a66ee8a161ab4a859f08330412461d9704ff
repoDigests:
- localhost/minikube-local-cache-test@sha256:e3eb63d388de3c87a118c0b6e3693187905202f0d0ce75ab876d3cd69f106be2
repoTags:
- localhost/minikube-local-cache-test:functional-225429
size: "3343"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "127149008"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "74687895"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-225429
size: "34114467"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-225429 image ls --format yaml --alsologtostderr:
I0919 16:48:46.590936   21159 out.go:296] Setting OutFile to fd 1 ...
I0919 16:48:46.591047   21159 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:46.591056   21159 out.go:309] Setting ErrFile to fd 2...
I0919 16:48:46.591061   21159 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:46.591273   21159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
I0919 16:48:46.591816   21159 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:46.591917   21159 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:46.592284   21159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:46.592335   21159 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:46.605566   21159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
I0919 16:48:46.606107   21159 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:46.606676   21159 main.go:141] libmachine: Using API Version  1
I0919 16:48:46.606702   21159 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:46.607015   21159 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:46.607146   21159 main.go:141] libmachine: (functional-225429) Calling .GetState
I0919 16:48:46.609325   21159 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:46.609366   21159 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:46.623200   21159 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
I0919 16:48:46.623683   21159 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:46.624196   21159 main.go:141] libmachine: Using API Version  1
I0919 16:48:46.624221   21159 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:46.624565   21159 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:46.624748   21159 main.go:141] libmachine: (functional-225429) Calling .DriverName
I0919 16:48:46.624924   21159 ssh_runner.go:195] Run: systemctl --version
I0919 16:48:46.624948   21159 main.go:141] libmachine: (functional-225429) Calling .GetSSHHostname
I0919 16:48:46.627935   21159 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:46.628301   21159 main.go:141] libmachine: (functional-225429) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:de:81", ip: ""} in network mk-functional-225429: {Iface:virbr1 ExpiryTime:2023-09-19 17:45:19 +0000 UTC Type:0 Mac:52:54:00:fa:de:81 Iaid: IPaddr:192.168.50.71 Prefix:24 Hostname:functional-225429 Clientid:01:52:54:00:fa:de:81}
I0919 16:48:46.628372   21159 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined IP address 192.168.50.71 and MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:46.628430   21159 main.go:141] libmachine: (functional-225429) Calling .GetSSHPort
I0919 16:48:46.628597   21159 main.go:141] libmachine: (functional-225429) Calling .GetSSHKeyPath
I0919 16:48:46.628725   21159 main.go:141] libmachine: (functional-225429) Calling .GetSSHUsername
I0919 16:48:46.628856   21159 sshutil.go:53] new ssh client: &{IP:192.168.50.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/functional-225429/id_rsa Username:docker}
I0919 16:48:46.751304   21159 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 16:48:46.847125   21159 main.go:141] libmachine: Making call to close driver server
I0919 16:48:46.847148   21159 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:46.847807   21159 main.go:141] libmachine: (functional-225429) DBG | Closing plugin on server side
I0919 16:48:46.847864   21159 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:46.847875   21159 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:48:46.847890   21159 main.go:141] libmachine: Making call to close driver server
I0919 16:48:46.847903   21159 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:46.848170   21159 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:46.848195   21159 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-225429 ssh pgrep buildkitd: exit status 1 (233.195374ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image build -t localhost/my-image:functional-225429 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 image build -t localhost/my-image:functional-225429 testdata/build --alsologtostderr: (4.301548996s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-225429 image build -t localhost/my-image:functional-225429 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8bcdf324067
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-225429
--> 41769f32bbe
Successfully tagged localhost/my-image:functional-225429
41769f32bbe83dd70029ab5cbe8cc0fa176902d6381a0e91bd499865c168cd5a
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-225429 image build -t localhost/my-image:functional-225429 testdata/build --alsologtostderr:
I0919 16:48:47.095827   21254 out.go:296] Setting OutFile to fd 1 ...
I0919 16:48:47.096005   21254 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:47.096017   21254 out.go:309] Setting ErrFile to fd 2...
I0919 16:48:47.096024   21254 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:48:47.096295   21254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
I0919 16:48:47.097043   21254 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:47.097538   21254 config.go:182] Loaded profile config "functional-225429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0919 16:48:47.097933   21254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:47.097979   21254 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:47.112116   21254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43707
I0919 16:48:47.112552   21254 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:47.113173   21254 main.go:141] libmachine: Using API Version  1
I0919 16:48:47.113202   21254 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:47.113554   21254 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:47.113758   21254 main.go:141] libmachine: (functional-225429) Calling .GetState
I0919 16:48:47.115371   21254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 16:48:47.115415   21254 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:48:47.130836   21254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
I0919 16:48:47.131229   21254 main.go:141] libmachine: () Calling .GetVersion
I0919 16:48:47.131704   21254 main.go:141] libmachine: Using API Version  1
I0919 16:48:47.131718   21254 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:48:47.132078   21254 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:48:47.132301   21254 main.go:141] libmachine: (functional-225429) Calling .DriverName
I0919 16:48:47.132553   21254 ssh_runner.go:195] Run: systemctl --version
I0919 16:48:47.132585   21254 main.go:141] libmachine: (functional-225429) Calling .GetSSHHostname
I0919 16:48:47.135348   21254 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:47.135757   21254 main.go:141] libmachine: (functional-225429) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:de:81", ip: ""} in network mk-functional-225429: {Iface:virbr1 ExpiryTime:2023-09-19 17:45:19 +0000 UTC Type:0 Mac:52:54:00:fa:de:81 Iaid: IPaddr:192.168.50.71 Prefix:24 Hostname:functional-225429 Clientid:01:52:54:00:fa:de:81}
I0919 16:48:47.135795   21254 main.go:141] libmachine: (functional-225429) DBG | domain functional-225429 has defined IP address 192.168.50.71 and MAC address 52:54:00:fa:de:81 in network mk-functional-225429
I0919 16:48:47.135934   21254 main.go:141] libmachine: (functional-225429) Calling .GetSSHPort
I0919 16:48:47.136099   21254 main.go:141] libmachine: (functional-225429) Calling .GetSSHKeyPath
I0919 16:48:47.136293   21254 main.go:141] libmachine: (functional-225429) Calling .GetSSHUsername
I0919 16:48:47.136451   21254 sshutil.go:53] new ssh client: &{IP:192.168.50.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/functional-225429/id_rsa Username:docker}
I0919 16:48:47.230971   21254 build_images.go:151] Building image from path: /tmp/build.3109787831.tar
I0919 16:48:47.231041   21254 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 16:48:47.245046   21254 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3109787831.tar
I0919 16:48:47.251190   21254 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3109787831.tar: stat -c "%s %y" /var/lib/minikube/build/build.3109787831.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3109787831.tar': No such file or directory
I0919 16:48:47.251219   21254 ssh_runner.go:362] scp /tmp/build.3109787831.tar --> /var/lib/minikube/build/build.3109787831.tar (3072 bytes)
I0919 16:48:47.276795   21254 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3109787831
I0919 16:48:47.290134   21254 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3109787831 -xf /var/lib/minikube/build/build.3109787831.tar
I0919 16:48:47.305360   21254 crio.go:297] Building image: /var/lib/minikube/build/build.3109787831
I0919 16:48:47.305431   21254 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-225429 /var/lib/minikube/build/build.3109787831 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0919 16:48:51.313736   21254 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-225429 /var/lib/minikube/build/build.3109787831 --cgroup-manager=cgroupfs: (4.00826763s)
I0919 16:48:51.313804   21254 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3109787831
I0919 16:48:51.333620   21254 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3109787831.tar
I0919 16:48:51.344782   21254 build_images.go:207] Built localhost/my-image:functional-225429 from /tmp/build.3109787831.tar
I0919 16:48:51.344810   21254 build_images.go:123] succeeded building to: functional-225429
I0919 16:48:51.344814   21254 build_images.go:124] failed building to: 
I0919 16:48:51.344830   21254 main.go:141] libmachine: Making call to close driver server
I0919 16:48:51.344848   21254 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:51.345174   21254 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:51.345210   21254 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:48:51.345229   21254 main.go:141] libmachine: Making call to close driver server
I0919 16:48:51.345231   21254 main.go:141] libmachine: (functional-225429) DBG | Closing plugin on server side
I0919 16:48:51.345242   21254 main.go:141] libmachine: (functional-225429) Calling .Close
I0919 16:48:51.345513   21254 main.go:141] libmachine: (functional-225429) DBG | Closing plugin on server side
I0919 16:48:51.345546   21254 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:48:51.345562   21254 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.78s)
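Note: the three build steps echoed above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply that the testdata/build context passed to "image build" holds a Dockerfile along the lines of the sketch below. This is a reconstruction inferred from the logged STEP lines, not the actual file in the minikube repository, and the file name is an assumption.

# Hypothetical reconstruction of testdata/build/Dockerfile, inferred from STEP 1/3..3/3 in the output above
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

With the crio runtime the build is delegated to "sudo podman build ... --cgroup-manager=cgroupfs" on the node, which is why the "Trying to pull" and "Copying blob" lines above come from podman rather than a docker daemon.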

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.288399688s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-225429
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image load --daemon gcr.io/google-containers/addon-resizer:functional-225429 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 image load --daemon gcr.io/google-containers/addon-resizer:functional-225429 --alsologtostderr: (4.931000504s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image load --daemon gcr.io/google-containers/addon-resizer:functional-225429 --alsologtostderr
E0919 16:48:21.263166   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:48:21.268867   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:48:21.279125   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:48:21.299424   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:48:21.339729   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:48:21.420060   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:48:21.581175   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:48:21.902103   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
2023/09/19 16:48:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 image load --daemon gcr.io/google-containers/addon-resizer:functional-225429 --alsologtostderr: (2.299494791s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0919 16:48:23.823560   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.10016404s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-225429
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image load --daemon gcr.io/google-containers/addon-resizer:functional-225429 --alsologtostderr
E0919 16:48:26.384468   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 image load --daemon gcr.io/google-containers/addon-resizer:functional-225429 --alsologtostderr: (4.777544628s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image save gcr.io/google-containers/addon-resizer:functional-225429 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 image save gcr.io/google-containers/addon-resizer:functional-225429 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.161684729s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image rm gcr.io/google-containers/addon-resizer:functional-225429 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image ls
E0919 16:48:31.505091   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-225429
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-225429 image save --daemon gcr.io/google-containers/addon-resizer:functional-225429 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-225429 image save --daemon gcr.io/google-containers/addon-resizer:functional-225429 --alsologtostderr: (9.505493576s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-225429
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.54s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-225429
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-225429
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-225429
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (120.17s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-845293 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0919 16:49:02.225914   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:49:43.186248   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-845293 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m0.171374196s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (120.17s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.43s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845293 addons enable ingress --alsologtostderr -v=5
E0919 16:51:05.107397   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-845293 addons enable ingress --alsologtostderr -v=5: (16.430020226s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.43s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-845293 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

                                                
                                    
TestJSONOutput/start/Command (99.52s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-685865 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0919 16:54:18.204571   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 16:55:40.127229   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-685865 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.514712063s)
--- PASS: TestJSONOutput/start/Command (99.52s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-685865 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-685865 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.09s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-685865 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-685865 --output=json --user=testUser: (7.088178606s)
--- PASS: TestJSONOutput/stop/Command (7.09s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.17s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-477533 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-477533 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (53.360706ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f11e711a-a711-4671-9a22-c44624f009fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-477533] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5d485f8-2e54-42f5-a361-d0799f74b00e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17240"}}
	{"specversion":"1.0","id":"3e32ba2c-abb6-4182-aae0-323416405a83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"01e7595c-c7fa-427f-bcf3-dfc7f9e1730b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig"}}
	{"specversion":"1.0","id":"61a5edd1-a1d9-4457-bf12-c7f710bec68e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube"}}
	{"specversion":"1.0","id":"834ed35f-8d54-4af5-b614-5454f0221c37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"844a2690-9b51-4971-b818-c8953047b8d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"73b91ff5-b74a-4bb2-9751-8790e6193f6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-477533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-477533
--- PASS: TestErrorJSONOutput (0.17s)
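The stdout above shows the shape of minikube's --output=json stream: one CloudEvents-style JSON object per line, with the event kind in "type" (io.k8s.sigs.minikube.step, .info, .error) and the payload under "data". As a minimal sketch of consuming that stream outside the test harness (the jq filter and the "json-demo" profile name are illustrative additions, not part of the test):

    # print only the human-readable step messages; structured fields stay available for tooling
    out/minikube-linux-amd64 start -p json-demo --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'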

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (94.49s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-645769 --driver=kvm2  --container-runtime=crio
E0919 16:56:14.061733   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:14.067019   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:14.077266   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:14.097530   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:14.137783   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:14.218089   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:14.378512   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:14.699091   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:15.340029   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:16.620513   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:19.181363   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:24.301584   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:56:34.542680   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-645769 --driver=kvm2  --container-runtime=crio: (46.401629833s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-647791 --driver=kvm2  --container-runtime=crio
E0919 16:56:55.023688   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-647791 --driver=kvm2  --container-runtime=crio: (45.397384085s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-645769
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-647791
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-647791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-647791
helpers_test.go:175: Cleaning up "first-645769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-645769
--- PASS: TestMinikubeProfile (94.49s)
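TestMinikubeProfile is effectively a walkthrough of the profile workflow: two independent clusters are created, the active profile is switched with "minikube profile", and both are inspected with "profile list -ojson". A minimal sketch of the same flow, using hypothetical profile names in place of the generated ones:

    out/minikube-linux-amd64 start -p first --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 profile first            # make "first" the active profile
    out/minikube-linux-amd64 profile list -ojson      # list both profiles as JSON
    out/minikube-linux-amd64 delete -p second
    out/minikube-linux-amd64 delete -p first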

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-694120 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0919 16:57:35.984549   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 16:57:56.282798   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-694120 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.953925908s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.95s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-694120 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-694120 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
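The two MountStart subtests above start a Kubernetes-free guest with a 9p host mount and then verify the mount from inside the VM. A minimal sketch with the same flags, assuming a hypothetical profile name (the gid/uid/msize/port values are the ones used by the test):

    out/minikube-linux-amd64 start -p mount-demo --memory=2048 --mount --mount-gid 0 \
      --mount-uid 0 --mount-msize 6543 --mount-port 46464 --no-kubernetes \
      --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-demo ssh -- mount | grep 9p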

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-708182 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0919 16:58:21.266377   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 16:58:23.969246   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-708182 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.495349719s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.50s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-708182 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-708182 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-694120 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-708182 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-708182 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.13s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-708182
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-708182: (1.127620753s)
--- PASS: TestMountStart/serial/Stop (1.13s)

                                                
                                    
TestMountStart/serial/RestartStopped (27.02s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-708182
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-708182: (26.024744584s)
--- PASS: TestMountStart/serial/RestartStopped (27.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-708182 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-708182 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553715 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0919 16:58:57.905576   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553715 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.389798635s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.79s)
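FreshStart2Nodes shows that a two-node cluster is a single start invocation plus a status check. A minimal sketch, with a hypothetical profile name:

    out/minikube-linux-amd64 start -p multinode-demo --nodes=2 --memory=2200 --wait=true \
      --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-demo status --alsologtostderr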

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-553715 -- rollout status deployment/busybox: (4.512364334s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-m9sw8 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-xj8tc -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-m9sw8 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-xj8tc -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-m9sw8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-553715 -- exec busybox-5bc68d56bd-xj8tc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.12s)
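DeployApp2Nodes drives the bundled kubectl through minikube: apply a Deployment, wait for the rollout, then exec nslookup in the pods to confirm cluster DNS on both nodes. A minimal sketch of the same sequence; the manifest path and pod name are placeholders rather than files from this repository:

    out/minikube-linux-amd64 kubectl -p multinode-demo -- apply -f <manifest.yaml>
    out/minikube-linux-amd64 kubectl -p multinode-demo -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p multinode-demo -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local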

                                                
                                    
TestMultiNode/serial/AddNode (43.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-553715 -v 3 --alsologtostderr
E0919 17:01:14.060598   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-553715 -v 3 --alsologtostderr: (42.428757781s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.01s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp testdata/cp-test.txt multinode-553715:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715 "sudo cat /home/docker/cp-test.txt"
E0919 17:01:41.746627   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp multinode-553715:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile242511980/001/cp-test_multinode-553715.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp multinode-553715:/home/docker/cp-test.txt multinode-553715-m02:/home/docker/cp-test_multinode-553715_multinode-553715-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m02 "sudo cat /home/docker/cp-test_multinode-553715_multinode-553715-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp multinode-553715:/home/docker/cp-test.txt multinode-553715-m03:/home/docker/cp-test_multinode-553715_multinode-553715-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m03 "sudo cat /home/docker/cp-test_multinode-553715_multinode-553715-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp testdata/cp-test.txt multinode-553715-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp multinode-553715-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile242511980/001/cp-test_multinode-553715-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp multinode-553715-m02:/home/docker/cp-test.txt multinode-553715:/home/docker/cp-test_multinode-553715-m02_multinode-553715.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715 "sudo cat /home/docker/cp-test_multinode-553715-m02_multinode-553715.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp multinode-553715-m02:/home/docker/cp-test.txt multinode-553715-m03:/home/docker/cp-test_multinode-553715-m02_multinode-553715-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m03 "sudo cat /home/docker/cp-test_multinode-553715-m02_multinode-553715-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp testdata/cp-test.txt multinode-553715-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp multinode-553715-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile242511980/001/cp-test_multinode-553715-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp multinode-553715-m03:/home/docker/cp-test.txt multinode-553715:/home/docker/cp-test_multinode-553715-m03_multinode-553715.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715 "sudo cat /home/docker/cp-test_multinode-553715-m03_multinode-553715.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 cp multinode-553715-m03:/home/docker/cp-test.txt multinode-553715-m02:/home/docker/cp-test_multinode-553715-m03_multinode-553715-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 ssh -n multinode-553715-m02 "sudo cat /home/docker/cp-test_multinode-553715-m03_multinode-553715-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.04s)
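The CopyFile matrix above reduces to three cp forms, each verified with an ssh cat: host to node, node back to the host, and node to node. A minimal sketch, with hypothetical profile and node names and placeholder paths:

    out/minikube-linux-amd64 -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"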

                                                
                                    
TestMultiNode/serial/StopNode (2.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-553715 node stop m03: (2.071749425s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553715 status: exit status 7 (406.888416ms)

                                                
                                                
-- stdout --
	multinode-553715
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-553715-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-553715-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-553715 status --alsologtostderr: exit status 7 (412.46078ms)

                                                
                                                
-- stdout --
	multinode-553715
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-553715-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-553715-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:01:50.415822   28216 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:01:50.416222   28216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:01:50.416238   28216 out.go:309] Setting ErrFile to fd 2...
	I0919 17:01:50.416246   28216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:01:50.416748   28216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:01:50.417316   28216 out.go:303] Setting JSON to false
	I0919 17:01:50.417363   28216 mustload.go:65] Loading cluster: multinode-553715
	I0919 17:01:50.417466   28216 notify.go:220] Checking for updates...
	I0919 17:01:50.417846   28216 config.go:182] Loaded profile config "multinode-553715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:01:50.417864   28216 status.go:255] checking status of multinode-553715 ...
	I0919 17:01:50.418294   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:01:50.418362   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:01:50.432846   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39203
	I0919 17:01:50.433287   28216 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:01:50.433935   28216 main.go:141] libmachine: Using API Version  1
	I0919 17:01:50.433960   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:01:50.434291   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:01:50.434473   28216 main.go:141] libmachine: (multinode-553715) Calling .GetState
	I0919 17:01:50.435986   28216 status.go:330] multinode-553715 host status = "Running" (err=<nil>)
	I0919 17:01:50.436003   28216 host.go:66] Checking if "multinode-553715" exists ...
	I0919 17:01:50.436283   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:01:50.436318   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:01:50.450334   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I0919 17:01:50.450714   28216 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:01:50.451081   28216 main.go:141] libmachine: Using API Version  1
	I0919 17:01:50.451104   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:01:50.451363   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:01:50.451513   28216 main.go:141] libmachine: (multinode-553715) Calling .GetIP
	I0919 17:01:50.454267   28216 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:01:50.454677   28216 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:01:50.454706   28216 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:01:50.454834   28216 host.go:66] Checking if "multinode-553715" exists ...
	I0919 17:01:50.455100   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:01:50.455130   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:01:50.468789   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0919 17:01:50.469134   28216 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:01:50.469480   28216 main.go:141] libmachine: Using API Version  1
	I0919 17:01:50.469503   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:01:50.469758   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:01:50.469910   28216 main.go:141] libmachine: (multinode-553715) Calling .DriverName
	I0919 17:01:50.470083   28216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 17:01:50.470107   28216 main.go:141] libmachine: (multinode-553715) Calling .GetSSHHostname
	I0919 17:01:50.472285   28216 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:01:50.472636   28216 main.go:141] libmachine: (multinode-553715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:c6:86", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 17:59:13 +0000 UTC Type:0 Mac:52:54:00:01:c6:86 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:multinode-553715 Clientid:01:52:54:00:01:c6:86}
	I0919 17:01:50.472664   28216 main.go:141] libmachine: (multinode-553715) DBG | domain multinode-553715 has defined IP address 192.168.39.38 and MAC address 52:54:00:01:c6:86 in network mk-multinode-553715
	I0919 17:01:50.472786   28216 main.go:141] libmachine: (multinode-553715) Calling .GetSSHPort
	I0919 17:01:50.472933   28216 main.go:141] libmachine: (multinode-553715) Calling .GetSSHKeyPath
	I0919 17:01:50.473073   28216 main.go:141] libmachine: (multinode-553715) Calling .GetSSHUsername
	I0919 17:01:50.473199   28216 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715/id_rsa Username:docker}
	I0919 17:01:50.567459   28216 ssh_runner.go:195] Run: systemctl --version
	I0919 17:01:50.573079   28216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:01:50.586989   28216 kubeconfig.go:92] found "multinode-553715" server: "https://192.168.39.38:8443"
	I0919 17:01:50.587017   28216 api_server.go:166] Checking apiserver status ...
	I0919 17:01:50.587048   28216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:01:50.600735   28216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1122/cgroup
	I0919 17:01:50.610968   28216 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pode33690ea2d34a4cb01de0af39fba7d80/crio-c39aa41434d37a253322bb7d2c0a398d72458c10952025636035ffdc9b5743b5"
	I0919 17:01:50.611040   28216 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pode33690ea2d34a4cb01de0af39fba7d80/crio-c39aa41434d37a253322bb7d2c0a398d72458c10952025636035ffdc9b5743b5/freezer.state
	I0919 17:01:50.620625   28216 api_server.go:204] freezer state: "THAWED"
	I0919 17:01:50.620644   28216 api_server.go:253] Checking apiserver healthz at https://192.168.39.38:8443/healthz ...
	I0919 17:01:50.625804   28216 api_server.go:279] https://192.168.39.38:8443/healthz returned 200:
	ok
	I0919 17:01:50.625824   28216 status.go:421] multinode-553715 apiserver status = Running (err=<nil>)
	I0919 17:01:50.625835   28216 status.go:257] multinode-553715 status: &{Name:multinode-553715 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 17:01:50.625857   28216 status.go:255] checking status of multinode-553715-m02 ...
	I0919 17:01:50.626147   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:01:50.626186   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:01:50.640455   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I0919 17:01:50.640947   28216 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:01:50.641401   28216 main.go:141] libmachine: Using API Version  1
	I0919 17:01:50.641424   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:01:50.641714   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:01:50.641865   28216 main.go:141] libmachine: (multinode-553715-m02) Calling .GetState
	I0919 17:01:50.643320   28216 status.go:330] multinode-553715-m02 host status = "Running" (err=<nil>)
	I0919 17:01:50.643344   28216 host.go:66] Checking if "multinode-553715-m02" exists ...
	I0919 17:01:50.643630   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:01:50.643662   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:01:50.658359   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0919 17:01:50.658709   28216 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:01:50.659143   28216 main.go:141] libmachine: Using API Version  1
	I0919 17:01:50.659164   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:01:50.659408   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:01:50.659586   28216 main.go:141] libmachine: (multinode-553715-m02) Calling .GetIP
	I0919 17:01:50.662252   28216 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:01:50.662687   28216 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:01:50.662717   28216 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:01:50.662802   28216 host.go:66] Checking if "multinode-553715-m02" exists ...
	I0919 17:01:50.663064   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:01:50.663113   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:01:50.676471   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0919 17:01:50.676768   28216 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:01:50.677173   28216 main.go:141] libmachine: Using API Version  1
	I0919 17:01:50.677192   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:01:50.677433   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:01:50.677575   28216 main.go:141] libmachine: (multinode-553715-m02) Calling .DriverName
	I0919 17:01:50.677728   28216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 17:01:50.677744   28216 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHHostname
	I0919 17:01:50.679976   28216 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:01:50.680347   28216 main.go:141] libmachine: (multinode-553715-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:f9:a1", ip: ""} in network mk-multinode-553715: {Iface:virbr1 ExpiryTime:2023-09-19 18:00:19 +0000 UTC Type:0 Mac:52:54:00:b9:f9:a1 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-553715-m02 Clientid:01:52:54:00:b9:f9:a1}
	I0919 17:01:50.680385   28216 main.go:141] libmachine: (multinode-553715-m02) DBG | domain multinode-553715-m02 has defined IP address 192.168.39.11 and MAC address 52:54:00:b9:f9:a1 in network mk-multinode-553715
	I0919 17:01:50.680575   28216 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHPort
	I0919 17:01:50.680725   28216 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHKeyPath
	I0919 17:01:50.680874   28216 main.go:141] libmachine: (multinode-553715-m02) Calling .GetSSHUsername
	I0919 17:01:50.680977   28216 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-6042/.minikube/machines/multinode-553715-m02/id_rsa Username:docker}
	I0919 17:01:50.763537   28216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:01:50.776387   28216 status.go:257] multinode-553715-m02 status: &{Name:multinode-553715-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 17:01:50.776455   28216 status.go:255] checking status of multinode-553715-m03 ...
	I0919 17:01:50.776854   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 17:01:50.776902   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:01:50.790993   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I0919 17:01:50.791391   28216 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:01:50.791870   28216 main.go:141] libmachine: Using API Version  1
	I0919 17:01:50.791894   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:01:50.792160   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:01:50.792377   28216 main.go:141] libmachine: (multinode-553715-m03) Calling .GetState
	I0919 17:01:50.793944   28216 status.go:330] multinode-553715-m03 host status = "Stopped" (err=<nil>)
	I0919 17:01:50.793955   28216 status.go:343] host is not running, skipping remaining checks
	I0919 17:01:50.793960   28216 status.go:257] multinode-553715-m03 status: &{Name:multinode-553715-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.89s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (31.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-553715 node start m03 --alsologtostderr: (31.054642518s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.68s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-553715 node delete m03: (1.195859442s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.71s)
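StopNode, StartAfterStop, and DeleteNode together cover the per-node lifecycle; note from the output above that status exits with code 7 while any node is stopped, which the test treats as expected rather than a failure. A minimal sketch, with a hypothetical profile name:

    out/minikube-linux-amd64 -p multinode-demo node stop m03
    out/minikube-linux-amd64 -p multinode-demo status          # exits 7 while m03 is down
    out/minikube-linux-amd64 -p multinode-demo node start m03
    out/minikube-linux-amd64 -p multinode-demo node delete m03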

                                                
                                    
TestMultiNode/serial/RestartMultiNode (447.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553715 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0919 17:17:56.282899   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:18:21.262915   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 17:21:14.061748   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 17:21:24.308809   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 17:22:56.283051   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:23:21.266185   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553715 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.975232758s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-553715 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.50s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (49.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-553715
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553715-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-553715-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (54.035635ms)

                                                
                                                
-- stdout --
	* [multinode-553715-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-553715-m02' is duplicated with machine name 'multinode-553715-m02' in profile 'multinode-553715'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-553715-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-553715-m03 --driver=kvm2  --container-runtime=crio: (48.579745831s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-553715
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-553715: exit status 80 (206.767622ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-553715
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-553715-m03 already exists in multinode-553715-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-553715-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.65s)

                                                
                                    
TestScheduledStopUnix (118.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-794900 --memory=2048 --driver=kvm2  --container-runtime=crio
E0919 17:29:17.108023   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-794900 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.801499555s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794900 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-794900 -n scheduled-stop-794900
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794900 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794900 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-794900 -n scheduled-stop-794900
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-794900
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794900 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-794900
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-794900: exit status 7 (55.972298ms)

                                                
                                                
-- stdout --
	scheduled-stop-794900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-794900 -n scheduled-stop-794900
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-794900 -n scheduled-stop-794900: exit status 7 (54.089004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-794900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-794900
--- PASS: TestScheduledStopUnix (118.27s)

                                                
                                    
x
+
TestKubernetesUpgrade (180.7s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-159716 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-159716 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m38.497151063s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-159716
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-159716: (2.295335033s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-159716 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-159716 status --format={{.Host}}: exit status 7 (66.94145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-159716 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-159716 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.227746172s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-159716 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-159716 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-159716 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (109.612991ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-159716] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-159716
	    minikube start -p kubernetes-upgrade-159716 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1597162 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-159716 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-159716 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-159716 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (32.34785222s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-159716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-159716
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-159716: (1.066121934s)
--- PASS: TestKubernetesUpgrade (180.70s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.03s)

                                                
                                    
x
+
TestPause/serial/Start (119.56s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-169801 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0919 17:32:56.282091   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:33:21.263441   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-169801 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m59.561771443s)
--- PASS: TestPause/serial/Start (119.56s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372421 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-372421 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (74.56247ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-372421] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (49.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372421 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-372421 --driver=kvm2  --container-runtime=crio: (49.559842068s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-372421 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372421 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-372421 --no-kubernetes --driver=kvm2  --container-runtime=crio: (8.236128607s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-372421 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-372421 status -o json: exit status 2 (234.231497ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-372421","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-372421
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-372421: (1.079634075s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372421 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-372421 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.593904957s)
--- PASS: TestNoKubernetes/serial/Start (28.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-372421 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-372421 "sudo systemctl is-active --quiet service kubelet": exit status 1 (183.114684ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-372421
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-372421: (1.127899126s)
--- PASS: TestNoKubernetes/serial/Stop (1.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (92.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-372421 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-372421 --driver=kvm2  --container-runtime=crio: (1m32.162746546s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (92.16s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-359189
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-372421 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-372421 "sudo systemctl is-active --quiet service kubelet": exit status 1 (188.416579ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-648984 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-648984 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.116978ms)

                                                
                                                
-- stdout --
	* [false-648984] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:36:47.249283   41488 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:36:47.249549   41488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:36:47.249560   41488 out.go:309] Setting ErrFile to fd 2...
	I0919 17:36:47.249565   41488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:36:47.249738   41488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-6042/.minikube/bin
	I0919 17:36:47.250280   41488 out.go:303] Setting JSON to false
	I0919 17:36:47.251206   41488 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4757,"bootTime":1695140250,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:36:47.251260   41488 start.go:138] virtualization: kvm guest
	I0919 17:36:47.253487   41488 out.go:177] * [false-648984] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:36:47.255002   41488 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:36:47.256357   41488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:36:47.254996   41488 notify.go:220] Checking for updates...
	I0919 17:36:47.259007   41488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-6042/kubeconfig
	I0919 17:36:47.260510   41488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-6042/.minikube
	I0919 17:36:47.261929   41488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:36:47.263199   41488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:36:47.264909   41488 config.go:182] Loaded profile config "cert-expiration-142729": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:47.265002   41488 config.go:182] Loaded profile config "cert-options-512928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:47.265076   41488 config.go:182] Loaded profile config "force-systemd-env-367630": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0919 17:36:47.265151   41488 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:36:47.303333   41488 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 17:36:47.304969   41488 start.go:298] selected driver: kvm2
	I0919 17:36:47.304980   41488 start.go:902] validating driver "kvm2" against <nil>
	I0919 17:36:47.304990   41488 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:36:47.307022   41488 out.go:177] 
	W0919 17:36:47.308400   41488 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0919 17:36:47.309696   41488 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-648984 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-648984" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 19 Sep 2023 17:36:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.61.96:8443
  name: cert-expiration-142729
contexts:
- context:
    cluster: cert-expiration-142729
    extensions:
    - extension:
        last-update: Tue, 19 Sep 2023 17:36:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-142729
  name: cert-expiration-142729
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-142729
  user:
    client-certificate: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/cert-expiration-142729/client.crt
    client-key: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/cert-expiration-142729/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-648984

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-648984"

                                                
                                                
----------------------- debugLogs end: false-648984 [took: 2.468367583s] --------------------------------
helpers_test.go:175: Cleaning up "false-648984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-648984
--- PASS: TestNetworkPlugins/group/false (2.71s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (148.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-215748 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-215748 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (2m28.341605359s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (148.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (89.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-415155 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E0919 17:38:04.309612   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
E0919 17:38:21.264996   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-415155 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m29.615406949s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.62s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-415155 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8441e833-a91b-47d4-993d-334d385ed837] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8441e833-a91b-47d4-993d-334d385ed837] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.033795646s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-415155 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.61s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-415155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-415155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.024259635s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-415155 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-215748 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72bc9155-0cb3-43aa-a192-b0a0308719ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72bc9155-0cb3-43aa-a192-b0a0308719ab] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.033680574s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-215748 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.51s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-415555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-415555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m6.239371802s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-215748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-215748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.096142134s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-215748 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-415555 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1fe5ecac-dc28-400f-9832-186d228038a1] Pending
helpers_test.go:344: "busybox" [1fe5ecac-dc28-400f-9832-186d228038a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 17:41:14.060259   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1fe5ecac-dc28-400f-9832-186d228038a1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.026177242s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-415555 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-415555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-415555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040705947s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-415555 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (662.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-415155 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-415155 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (11m2.164727597s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-415155 -n embed-certs-415155
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (662.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (598.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-215748 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E0919 17:42:56.281827   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
E0919 17:43:21.263490   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-215748 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (9m58.524907115s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-215748 -n no-preload-215748
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (598.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (507.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-415555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E0919 17:45:57.108307   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 17:46:14.060336   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-415555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (8m27.162840156s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-415555 -n default-k8s-diff-port-415555
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (507.44s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-100627 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9cd77250-57f7-4b53-adf3-99e02b48facf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9cd77250-57f7-4b53-adf3-99e02b48facf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.034638576s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-100627 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-100627 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-100627 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (725.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-100627 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0919 17:51:14.061187   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-100627 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (12m5.683780991s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-100627 -n old-k8s-version-100627
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (725.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (61.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-199016 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-199016 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m1.609592153s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (69.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0919 18:07:56.281547   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/functional-225429/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m9.874383101s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (77.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m17.612743113s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.61s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-199016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-199016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.527180945s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-199016 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-199016 --alsologtostderr -v=3: (10.401808888s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-199016 -n newest-cni-199016
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-199016 -n newest-cni-199016: exit status 7 (60.255531ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-199016 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (72.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-199016 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E0919 18:08:21.263570   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-199016 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m12.393152327s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-199016 -n newest-cni-199016
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (72.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-648984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-648984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rdcg4" [cdd20d4c-405f-4f01-ac18-639f6f0985ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rdcg4" [cdd20d4c-405f-4f01-ac18-639f6f0985ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.017623917s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-648984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
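
The three connectivity probes above (DNS, Localhost, HairPin) are plain kubectl exec calls against the netcat deployment created by NetCatPod; they are collected here so they can be re-run by hand against the auto-648984 context from this log (the same trio repeats for every plugin profile below):

    kubectl --context auto-648984 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"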

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (103.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m43.52342492s)
--- PASS: TestNetworkPlugins/group/calico/Start (103.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-htsgh" [ea4c9ece-8131-43e7-97c9-3071cc896b65] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.02590237s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-648984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-648984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xnp8j" [6b808c35-f82a-4ff2-a3b1-965c0bcdb751] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xnp8j" [6b808c35-f82a-4ff2-a3b1-965c0bcdb751] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.014170361s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-199016 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-199016 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-199016 --alsologtostderr -v=1: (1.162599868s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-199016 -n newest-cni-199016
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-199016 -n newest-cni-199016: exit status 2 (280.522461ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-199016 -n newest-cni-199016
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-199016 -n newest-cni-199016: exit status 2 (259.89521ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-199016 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-199016 -n newest-cni-199016
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-199016 -n newest-cni-199016
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.18s)
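
The pause/unpause check above queries each component with a separate --format Go template ({{.Host}}, {{.APIServer}}, {{.Kubelet}}). When reproducing it by hand, a single call with a combined template should return all three fields at once; this is a sketch, assuming minikube's status --format accepts a multi-field Go template over the status struct, with the profile name taken from the log above:

    out/minikube-linux-amd64 status -p newest-cni-199016 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'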

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (100.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m40.507436566s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (100.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-648984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (124.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0919 18:09:58.899567   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:09:58.904886   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:09:58.915233   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:09:58.935586   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:09:58.975895   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:09:59.056291   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:09:59.217071   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:09:59.537623   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:10:00.177899   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:10:01.458248   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m4.179012764s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (124.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (128.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0919 18:10:09.139184   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:10:19.380230   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
E0919 18:10:39.860701   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/no-preload-215748/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m8.195008369s)
--- PASS: TestNetworkPlugins/group/flannel/Start (128.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wswln" [c5fa0fee-b820-4da8-8f71-33309c4e11f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.02679935s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-648984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-648984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kcw7b" [6cae9bd7-37f0-4cd3-bdd7-1f881fb48068] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 18:11:09.868984   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
E0919 18:11:09.874321   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
E0919 18:11:09.884635   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
E0919 18:11:09.904958   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
E0919 18:11:09.945273   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-kcw7b" [6cae9bd7-37f0-4cd3-bdd7-1f881fb48068] Running
E0919 18:11:10.026379   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
E0919 18:11:10.187041   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
E0919 18:11:10.507516   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
E0919 18:11:11.147738   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.016558109s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-648984 "pgrep -a kubelet"
E0919 18:11:12.428078   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-648984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hn77s" [dcfc39db-6873-458b-aa57-09fa375c811a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 18:11:14.060365   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/ingress-addon-legacy-845293/client.crt: no such file or directory
E0919 18:11:14.988546   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-hn77s" [dcfc39db-6873-458b-aa57-09fa375c811a] Running
E0919 18:11:24.311305   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.017366254s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-648984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-648984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (100.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0919 18:11:41.833612   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
E0919 18:11:41.838845   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
E0919 18:11:41.849107   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
E0919 18:11:41.869450   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
E0919 18:11:41.909810   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
E0919 18:11:41.990161   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
E0919 18:11:42.151200   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
E0919 18:11:42.471837   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-648984 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m40.271540516s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-648984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-648984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z94kz" [453727fe-248f-4206-9cee-78e773f53049] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 18:12:02.323786   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-z94kz" [453727fe-248f-4206-9cee-78e773f53049] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.017196171s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-648984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tpmd5" [34f923b2-f8e9-4a90-94fd-46221e73a6a8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.020765395s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-648984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-648984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fwkq7" [75ab53b1-adf9-499f-ae49-c478080b8c03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 18:12:22.804738   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/old-k8s-version-100627/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fwkq7" [75ab53b1-adf9-499f-ae49-c478080b8c03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.017501738s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-648984 exec deployment/netcat -- nslookup kubernetes.default
E0919 18:12:31.792218   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/default-k8s-diff-port-415555/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-648984 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-648984 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ndqrp" [757bda6b-ec7a-477c-80b1-6586ca060768] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ndqrp" [757bda6b-ec7a-477c-80b1-6586ca060768] Running
E0919 18:13:21.263267   13239 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/addons-897988/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.014121608s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-648984 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-648984 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (36/287)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.2/cached-images 0
13 TestDownloadOnly/v1.28.2/binaries 0
14 TestDownloadOnly/v1.28.2/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.03
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
232 TestStartStop/group/disable-driver-mounts 0.14
250 TestNetworkPlugins/group/kubenet 2.6
258 TestNetworkPlugins/group/cilium 3.15
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.03s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
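Note: every TunnelCmd subtest above skips for the same reason: the test host lacks passwordless sudo for route changes. A minimal sketch of such a pre-check, assuming `sudo -n` is used to avoid a prompt (the real probe at functional_test_tunnel_test.go:90 may differ):

package functional

import (
	"os/exec"
	"testing"
)

// requirePasswordlessRoute skips when running `route` would prompt for a
// password; `sudo -n` fails rather than prompting, which mirrors the
// "exit status 1" recorded above.
func requirePasswordlessRoute(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "route", "-n").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}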

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-140688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-140688
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
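Note: even a skipped StartStop group deletes the profile it reserved, which is the helpers_test.go:175/178 output above. A minimal sketch of that cleanup step, assuming the binary path used elsewhere in this run; the real helper takes more arguments:

package integration

import (
	"context"
	"os/exec"
	"testing"
)

// cleanupProfile deletes a leftover minikube profile; failures are logged
// rather than failing the (already skipped) test.
func cleanupProfile(ctx context.Context, t *testing.T, profile string) {
	t.Helper()
	out, err := exec.CommandContext(ctx, "out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
	}
}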

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-648984 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-648984" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 19 Sep 2023 17:36:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.61.96:8443
  name: cert-expiration-142729
contexts:
- context:
    cluster: cert-expiration-142729
    extensions:
    - extension:
        last-update: Tue, 19 Sep 2023 17:36:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-142729
  name: cert-expiration-142729
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-142729
  user:
    client-certificate: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/cert-expiration-142729/client.crt
    client-key: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/cert-expiration-142729/client.key
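Note: the dumped kubeconfig has current-context set to "" and only carries the leftover cert-expiration-142729 entry, which is why every kubectl probe above reports that the kubenet-648984 context was not found. A minimal sketch of checking that condition with client-go's clientcmd package (the kubeconfig path is illustrative; the report does not state where the file lives):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/path/to/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	if _, ok := cfg.Contexts["kubenet-648984"]; !ok {
		// mirrors kubectl's "context was not found" errors above
		fmt.Println(`context "kubenet-648984" does not exist`)
	}
}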

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-648984

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-648984"

                                                
                                                
----------------------- debugLogs end: kubenet-648984 [took: 2.456946593s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-648984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-648984
--- SKIP: TestNetworkPlugins/group/kubenet (2.60s)
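Note: the debugLogs block above is a fixed battery of kubectl and minikube probes that runs even when the profile was never started, which is why every entry is a "context not found" or "profile not found" message. A rough sketch of the pattern, with an illustrative subset of commands (not the real list used by net_test.go):

package integration

import (
	"os/exec"
	"testing"
)

// debugLogsSketch runs each probe against the profile and logs whatever comes
// back, errors included; for an unstarted profile that is just the noise seen
// above.
func debugLogsSketch(t *testing.T, profile string) {
	t.Helper()
	probes := [][]string{
		{"kubectl", "--context", profile, "exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
		{"out/minikube-linux-amd64", "-p", profile, "ssh", "cat /etc/resolv.conf"},
		{"kubectl", "--context", profile, "get", "nodes,svc,ep,ds,deploy,pods", "-A", "-owide"},
	}
	for _, p := range probes {
		out, err := exec.Command(p[0], p[1:]...).CombinedOutput()
		t.Logf(">>> %v\n%s(err: %v)", p, out, err)
	}
}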

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-648984 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-648984" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17240-6042/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 19 Sep 2023 17:36:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.61.96:8443
  name: cert-expiration-142729
contexts:
- context:
    cluster: cert-expiration-142729
    extensions:
    - extension:
        last-update: Tue, 19 Sep 2023 17:36:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-142729
  name: cert-expiration-142729
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-142729
  user:
    client-certificate: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/cert-expiration-142729/client.crt
    client-key: /home/jenkins/minikube-integration/17240-6042/.minikube/profiles/cert-expiration-142729/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-648984

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-648984" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648984"

                                                
                                                
----------------------- debugLogs end: cilium-648984 [took: 3.003128971s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-648984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-648984
--- SKIP: TestNetworkPlugins/group/cilium (3.15s)

                                                
                                    